Navigating the AI Landscape

by Mario Mamalis August 10, 2023

Artificial Intelligence (AI) is a transformative technology that has the potential to revolutionize our lives.

We all know by now the enormous impact AI will have on our society. Some of us are excited and optimistic about the potential and the new capabilities we will unlock with the proper implementation and absorption of AI into our society. Others are much less optimistic and focus only on the risks.

I am more of an optimist, so my view is that while AI presents many challenges, with foresight, planning, and collaborative effort, society can navigate these changes in a manner that not only safeguards but enhances human lives. The future with AI doesn’t have to be a zero-sum game between machines and humans; it can be a symbiotic relationship where each amplifies the other’s strengths.

In this post, I will focus on the reality of AI and what is presently available to us. In future posts I will dive deeper into specific AI applications.

Current Breakthroughs

In essence, AI involves developing software that mirrors human actions and skills. Some of the areas where we have seen tangible benefits are:

1. Machine Learning

Often the backbone of AI, machine learning is the technique we use to train computer models to make inferences and predictions from data.

Simply put, machines learn from data. Our daily activities result in the production of enormous amounts of data. Whether it’s the text messages, emails, or social media updates we share, or the photos and videos we capture with our smartphones, we’re constantly churning out vast quantities of information. Beyond that, countless sensors in our homes, vehicles, urban environments, public transportation systems, and industrial zones create even more data.

Data experts harness this immense amount of information to train machine learning models. These models can then draw predictions and conclusions based on the patterns and associations identified within the data.

Real World Example: Machine Learning for Predicting Rainfall Patterns

  1. Data Collection: Data scientists gather years of meteorological data, which includes variables like temperature, humidity, wind speed, air pressure, and past rainfall measurements from various weather stations and satellites.
  2. Feature Engineering: Not all collected data might be relevant. Hence, it’s important to identify which features (or combinations of features) are the most indicative of an impending rainfall event.
  3. Training the Model: With the relevant features identified, a machine learning model, like a neural network or a decision tree, is trained on a portion of the collected data. The model learns the relationships between the features and the outcomes (e.g., whether it rained the next day).
  4. Validation and Testing: Once trained, the model is tested on a different subset of the data (which it hasn’t seen before) to verify its accuracy in predicting rainfall.
  5. Real-time Predictions: Once the model is adequately trained and validated, it can be used in real-time. For instance, if sensors detect a specific combination of temperature, humidity, and pressure on a particular day, the model might predict a 90% chance of rainfall the next day in a certain region.
  6. Continuous Learning: Weather is dynamic, and patterns may evolve over time due to various reasons, including climate change. Machine learning models can be set up for continuous learning. This means that as new data comes in, the model refines and updates its understanding, ensuring predictions remain accurate.

By utilizing ML in this way, meteorologists can offer more precise and timely warnings about rainfall, helping farmers plan their crops, cities manage potential flooding, and people plan their activities.
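
To make these steps more concrete, here is a minimal sketch of what such a pipeline could look like in C# with ML.NET. The file name, column layout, chosen features, and the logistic regression trainer are illustrative assumptions rather than a prescription, and the exact API surface may vary slightly between ML.NET versions.

using System;
using Microsoft.ML;
using Microsoft.ML.Data;

// Training data schema. The "weather.csv" file and its column layout are assumptions for this sketch.
public class WeatherObservation
{
    [LoadColumn(0)] public float Temperature { get; set; }
    [LoadColumn(1)] public float Humidity { get; set; }
    [LoadColumn(2)] public float Pressure { get; set; }
    [LoadColumn(3)] public float WindSpeed { get; set; }
    [LoadColumn(4)] public bool RainedNextDay { get; set; }
}

// Prediction output produced by the trained model.
public class RainPrediction
{
    [ColumnName("PredictedLabel")] public bool WillRain { get; set; }
    public float Probability { get; set; }
}

public static class RainfallModel
{
    public static void TrainAndPredict()
    {
        var mlContext = new MLContext(seed: 1);

        // Steps 1-2: load historical observations and combine the relevant features into one vector.
        IDataView data = mlContext.Data.LoadFromTextFile<WeatherObservation>(
            "weather.csv", hasHeader: true, separatorChar: ',');
        var split = mlContext.Data.TrainTestSplit(data, testFraction: 0.2);

        var pipeline = mlContext.Transforms
            .Concatenate("Features",
                nameof(WeatherObservation.Temperature),
                nameof(WeatherObservation.Humidity),
                nameof(WeatherObservation.Pressure),
                nameof(WeatherObservation.WindSpeed))
            .Append(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression(
                labelColumnName: nameof(WeatherObservation.RainedNextDay)));

        // Steps 3-4: train on one portion of the data and evaluate on the held-out portion.
        var model = pipeline.Fit(split.TrainSet);
        var metrics = mlContext.BinaryClassification.Evaluate(
            model.Transform(split.TestSet),
            labelColumnName: nameof(WeatherObservation.RainedNextDay));

        // Step 5: a real-time prediction for today's sensor readings.
        var engine = mlContext.Model.CreatePredictionEngine<WeatherObservation, RainPrediction>(model);
        var prediction = engine.Predict(new WeatherObservation
        {
            Temperature = 21.5f, Humidity = 88f, Pressure = 1002f, WindSpeed = 12f
        });

        Console.WriteLine(
            $"Rain tomorrow? {prediction.WillRain} ({prediction.Probability:P0}), test AUC: {metrics.AreaUnderRocCurve:F2}");
    }
}

Step 6, continuous learning, would amount to re-running the training periodically as new labeled observations arrive and redeploying the refreshed model.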

2. Anomaly Detection

Anomaly detection, often termed outlier detection, refers to the identification of items, events, or observations that do not conform to the expected pattern in a dataset. In the context of AI, it’s the use of algorithms and models to identify unusual patterns that do not align with expected behavior.

Real World Example: Anomaly Detection in F1 Gearbox Systems

  1. Data Collection: Modern F1 cars are equipped with thousands of sensors that continuously monitor various aspects of the car’s performance, from engine metrics to tire conditions. For the gearbox, these sensors can track parameters like temperature, RPM, gear engagement speed, and vibrations.
  2. Baseline Creation: Data from hundreds of laps is used to establish a ‘baseline’ or ‘normal’ behavior of the gearbox under various conditions – straights, tight turns, heavy acceleration, or deceleration.
  3. Real-time Monitoring: During a race or a practice session, the gearbox’s performance metrics are continuously compared to this baseline. Any deviation from the baseline, be it a sudden temperature spike or unexpected vibration, can be flagged instantly.
  4. Anomaly Detection: Advanced algorithms process this data in real-time to detect anomalies. For instance, if a gearbox typically operates at a specific temperature range during a certain track segment but suddenly registers a temperature that’s significantly higher or lower, the system flags this as an anomaly.
  5. Immediate Action: Once an anomaly is detected, the team receives instant alerts. Depending on the severity and type of anomaly, different actions can be taken. It could range from sending a warning to the driver, planning a pit stop to address the issue, or, in critical situations, advising the driver to retire the car to avoid catastrophic failure or danger.
  6. Post-Race Analysis: After the race, data engineers and technicians can delve deeper into the anomaly data to understand its root cause, ensuring that such issues can be preemptively addressed in future races.

This approach of anomaly detection in F1 not only ensures the optimal performance of the car but also significantly enhances driver safety. An unforeseen failure at the high speeds at which F1 cars operate can be catastrophic, making the quick detection and mitigation of potential issues a top priority.
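
As a rough illustration of the baseline-and-deviation idea, here is a small C# sketch that flags a gearbox temperature reading when it deviates from the baseline by more than a chosen number of standard deviations. The z-score threshold and the sample readings are assumptions for illustration; real F1 telemetry systems use far more sophisticated models across many correlated signals.

using System;
using System.Collections.Generic;
using System.Linq;

public static class GearboxAnomalyDetector
{
    // Flags a reading as anomalous when it is more than zThreshold standard deviations
    // away from the mean of the baseline laps for the same track segment.
    public static bool IsAnomaly(IReadOnlyList<double> baselineTemperatures,
                                 double currentTemperature,
                                 double zThreshold = 3.0)
    {
        double mean = baselineTemperatures.Average();
        double stdDev = Math.Sqrt(
            baselineTemperatures.Sum(t => Math.Pow(t - mean, 2)) / baselineTemperatures.Count);

        double zScore = stdDev == 0 ? 0 : Math.Abs(currentTemperature - mean) / stdDev;
        return zScore > zThreshold;
    }
}

// Example: baseline readings around 105 degrees; a sudden reading of 131 would be flagged.
// bool alert = GearboxAnomalyDetector.IsAnomaly(new[] { 104.8, 105.2, 105.0, 104.9, 105.3 }, 131.0);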

3. Computer Vision

Computer vision systems utilize machine learning models designed to process visual data from sources like cameras, videos, or pictures. The following are common computer vision tasks:

3.1 Image classification 

This refers to the task of assigning a label to an image based on its visual content. Essentially, it’s about categorizing what the image represents. For instance, given a picture, an image classification system might categorize it as a “cat”, “dog”, “car”, etc. This is typically achieved using deep learning models. The primary objective is to identify the main subject or theme of the image from a predefined set of categories.

3.2 Object detection

This is the process of identifying and locating specific objects within an image or video. Unlike image classification, which assigns a single label to the entire picture, object detection can recognize multiple items in the image and provide bounding boxes around each identified object. Commonly used in tasks like autonomous driving, surveillance, and image retrieval, it often employs deep learning models to both classify and spatially locate objects within the visual frame.

3.3 Semantic segmentation

This task involves dividing an image into segments where each segment corresponds to a specific object or class category. Instead of just identifying that an object is present (as in object detection) or classifying the image (as in image classification), semantic segmentation classifies each pixel of the image. As a result, it provides a detailed, pixel-level labeling, highlighting the specific regions in an image where each object or class is located. Common applications include self-driving cars (to understand road scenes) and medical imaging (to identify regions of interest).

3.4 Image analysis

This refers to the process of inspecting and interpreting visual data to derive meaningful insights. It involves various techniques that evaluate the features, patterns, and structures within images. By transforming visual content into actionable data, image analysis can be applied across diverse fields, from medical diagnostics to satellite imagery interpretation. Its goal is often to categorize, quantify, or enhance the visual data for further understanding or application. 

3.5 Face detection

This is the task of identifying and locating faces within an image or video frame. It determines the presence and location of faces. Typically, face detection algorithms focus on unique facial features such as eyes, nose, and mouth to differentiate faces from other objects in the image. This technology is foundational for applications like facial recognition, camera autofocus, and various security and social media applications.

4. Optical Character Recognition (OCR)

This is a technology that converts different types of documents, such as scanned paper documents, PDF files, or images captured by a digital camera, into editable and searchable data. By recognizing the characters present in the visual data, OCR enables the transformation of static, image-based content into dynamic text that can be edited, formatted, indexed, or searched. It’s commonly used in data entry automation, digitizing printed books, and extracting information from images.

5. Natural Language Processing

Natural language processing (NLP) is a subfield of AI focused on developing software capable of comprehending and generating human language, whether written or spoken.

With NLP, it’s possible to develop applications that can:

  • Examine and deduce meaning from text in documents, emails, and other mediums.
  • Recognize spoken words and produce spoken feedback.
  • Instantly convert phrases between languages, whether they’re spoken or written.
  • Understand instructions and decide on the relevant responses.

Real World Example: Customer Service Chatbots in E-commerce Websites

Problem Statement

Online retailers often have a vast number of customers visiting their websites, many of whom have queries about products, services, shipping, returns, etc. Addressing these in real-time with human agents for each customer can be costly and time-consuming.

NLP Solution

E-commerce platforms deploy chatbots equipped with NLP capabilities. When a customer types in a query, such as “What is the return policy for electronics?”, the NLP system in the chatbot interprets the question’s intent.

Functionality

  1. Tokenization: Breaks the input text into individual words or tokens.
  2. Intent Recognition: Understands the main purpose of the user’s message, i.e., getting information about the return policy for electronics.
  3. Entity Recognition: Identifies key components in the text, e.g., “electronics” as the product category.
  4. Response Generation: Based on the identified intent and entities, the chatbot retrieves the relevant information from its database (in this case, the return policy for electronics) and crafts a coherent response (a simplified sketch of steps 1-4 follows this list).
  5. Feedback Loop: If the chatbot’s answer is not satisfactory, the user’s feedback can be utilized to train and improve the NLP model, making the chatbot more efficient over time.
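
To illustrate steps 1-4 in the simplest possible way, here is a toy, rule-based sketch in C#. Production chatbots rely on trained language models or NLP services rather than keyword matching, and the product categories and policy answers below are invented for the example.

using System;
using System.Collections.Generic;
using System.Linq;

public class ReturnPolicyBot
{
    // Invented knowledge base: product category -> return policy answer.
    private static readonly Dictionary<string, string> PoliciesByCategory = new()
    {
        { "electronics", "Electronics can be returned within 30 days in their original packaging." },
        { "clothing", "Clothing can be returned within 60 days with the receipt." }
    };

    public string Reply(string message)
    {
        // 1. Tokenization: split the message into lower-case tokens.
        string[] tokens = message.ToLowerInvariant()
            .Split(new[] { ' ', '?', '.', ',' }, StringSplitOptions.RemoveEmptyEntries);

        // 2. Intent recognition: a crude keyword check for the "return policy" intent.
        bool isReturnPolicyIntent = tokens.Contains("return") || tokens.Contains("returns");

        // 3. Entity recognition: look for a known product category among the tokens.
        string category = PoliciesByCategory.Keys.FirstOrDefault(k => tokens.Contains(k));

        // 4. Response generation: combine intent and entity to pick an answer.
        if (isReturnPolicyIntent && category != null)
        {
            return PoliciesByCategory[category];
        }

        // Anything the bot cannot answer is handed off (and can feed the feedback loop in step 5).
        return "Sorry, I couldn't find that. A human agent will follow up with you.";
    }
}

Calling new ReturnPolicyBot().Reply("What is the return policy for electronics?") would return the electronics policy, while an unrecognized question falls through to the hand-off message.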

Benefits

  • 24/7 Customer Support: The chatbot can operate round the clock, ensuring customers from different time zones get real-time assistance.
  • Cost Efficiency: Reduces the need for a large customer service team.
  • Consistency: Provides uniform information to all customers.

This application of NLP has revolutionized the way businesses interact with their customers online, offering quick, consistent, and efficient responses.

6. Knowledge Mining

Knowledge mining involves extracting valuable insights, patterns, and knowledge from vast and often unstructured data sources. It combines techniques from data mining, machine learning, and big data analytics to transform raw information into a structured and understandable format. The goal is to discover hidden relationships, trends, and patterns that can inform decision-making, drive innovation, and provide a deeper understanding of complex subjects. Knowledge mining is particularly valuable in areas with huge datasets, like research, healthcare, and business analytics, where it aids in converting vast data into actionable intelligence.

Risks and Challenges

Artificial Intelligence holds immense potential to bring positive change to our world, but its use demands careful oversight and ethical considerations. Here are some potential shortcomings:

  • Bias influencing outcomes: For example, a lending model shows discrimination towards a particular gender due to skewed training data.
  • Unintended harm from errors: For example, a self-driving car has a system malfunction, leading to an accident.
  • Potential data breaches: For example, a bot designed for medical diagnoses uses confidential patient records stored without adequate security.
  • Inclusive design shortcomings: For example, a smart home device fails to offer audio feedback, leaving visually challenged users unsupported.
  • Need for transparency and trust: For example, a finance AI tool suggests investment strategies, but how does it determine them?
  • Accountability for AI decisions: For example, a faulty facial recognition system results in a wrongful conviction; who is held accountable?

Social Implications

The impact of AI on jobs and the skills landscape is profound, complex, and multifaceted. This seems to be one of the biggest fears people have about AI. Let’s delve deeper into this.

  • Job Displacement: Repetitive, manual, and rule-based tasks are more prone to automation. This can impact sectors like manufacturing, customer service, and basic data entry roles.
  • Job Creation: Historically, technological advancements have given rise to new jobs. Similarly, AI will create new roles that we might not even be able to envision now. Positions in AI ethics, AI system training, and AI system maintenance are examples of new job avenues.
  • Job Transformation: Some jobs won’t disappear but will transform. For instance, radiologists might spend less time analyzing X-rays (as AI can do that) and more time consulting with patients or other doctors based on AI’s findings.
  • Technical Skills: There will be an increased demand for individuals who understand AI, data science, machine learning, and related technologies.
  • Soft Skills: Emotional intelligence, creativity, critical thinking, and complex problem-solving will become even more invaluable. As AI systems handle more data-oriented tasks, uniquely human traits will become more prominent in the job market.
  • Adaptability: The pace of change means that the ability to learn and adapt is crucial. Lifelong learning and the readiness to acquire new skills will be vital.
  • Interdisciplinary Knowledge: Combining AI with domain-specific knowledge, whether it’s in arts, medicine, or finance, can lead to groundbreaking applications.

Ideas to Address Negative Impact

  • Education & Training: Governments and private institutions need to focus on retraining programs to help the workforce transition. This includes updating educational curricula to reflect the new skills demand and offering adult education initiatives focused on AI and technology.
  • Safety Nets: Support for those who lose jobs due to automation is vital. This could be in the form of unemployment benefits, retraining programs, or even discussions around universal basic income.
  • Ethical Considerations: Businesses should be encouraged to deploy AI responsibly, understanding its societal impact, and not just the bottom line. Ethical guidelines for AI application can help.
  • Inclusive Development: AI tools should be developed with input from a diverse group to ensure they address a broad range of needs and avoid built-in biases.
  • Local Solutions: AI’s impact might differ based on the region, economy, and culture. Tailored local strategies can better address specific challenges and opportunities.

Responsible AI – The Six Principles

Artificial Intelligence is not just a tool; it has become an integral part of our daily lives, reshaping industries and altering the fabric of society. With its increasing influence comes a pressing need for Responsible AI. But what exactly does this mean?

Responsible AI encompasses the practice of designing, developing, deploying, and managing AI in a manner that is transparent, ethical, and aligned with societal values and norms. It’s about ensuring that as AI systems make decisions, they do so in ways that are understandable, fair, and beneficial, while actively mitigating unintended consequences and harms.

Fair

AI systems ought to ensure equal treatment for everyone. Let’s say you design a machine learning model for a home loan approval process. The model’s predictions on loan approvals or rejections should be unbiased. It’s crucial that the model doesn’t favor or discriminate against groups based on gender, ethnicity, or any other criteria that could unjustly benefit or hinder specific applicant groups.

Safe and Reliable

AI systems must function with both precision and security. Imagine an AI-infused drone system for package deliveries or a machine learning algorithm assisting in air traffic control. Inaccuracies in these systems can have profound consequences, potentially jeopardizing safety.

It’s essential that AI-based software undergo meticulous testing and stringent deployment protocols to guarantee their reliability before they’re introduced to real-world scenarios.

Secure

AI systems ought to prioritize security and uphold privacy standards. AI systems, particularly their underlying machine learning models, draw upon vast data sets that might encompass sensitive personal information. The obligation to protect privacy doesn’t end once the models are developed and operational. As these systems continually utilize fresh data for predictions or decisions, both the data itself and the resultant choices can have associated privacy and security implications.

Inclusive

AI systems should be inclusive and resonate with all individuals. It’s vital that the benefits of AI extend across all societal divisions, be it physical abilities, gender, sexual orientation, ethnicity, or any other characteristics.

For example: An AI-driven voice recognition software shouldn’t just understand accents from major world languages but should also effectively recognize and interpret dialects and variations, ensuring people from remote regions or minority linguistic groups aren’t left out.

Transparent

AI systems should be transparent and comprehensible. Users ought to be well-informed about the system’s intent, its operational mechanisms, and any potential constraints.

For example: If a health app uses AI to assess the likelihood of a certain medical condition based on input symptoms, users should be informed about the sources of its medical data and the accuracy rate of its predictions.

Responsible

Responsibility for AI systems rests with their creators. Those designing and implementing AI solutions should adhere to a well-defined set of ethical and legal rules, ensuring the technology conforms to established standards.

For example: If a company designs an AI tool for recruitment, the architects should ensure it adheres to employment laws and anti-discrimination guidelines. If the tool inadvertently favors a particular age group or ethnicity, the creators must rectify the issue and ensure fairness in the recruitment process.

Final Thoughts

Artificial Intelligence presents transformative solutions to many challenges. AI systems possess the capacity to emulate human behaviors, interpret their environment, and take actions that were once thought of as science fiction.

However, such profound capabilities also carry significant responsibilities. As architects of AI innovations, we have a duty to ensure these technologies benefit the masses without unintentionally disadvantaging any individual or community.


Case Study: Transformative Digital Solutions with Xebia|Xpirit

by Mario Mamalis July 24, 2023

Our consultants were recently engaged in a project by a leading company in the life insurance industry. The mission was to develop a new API that would be used as an example of best practices and design patterns for all other APIs developed at the client. This was a unique opportunity for Xebia|Xpirit to actualize our core corporate values: building engineering cultures, sharing knowledge, quality without compromise, and fostering long-lasting client relationships.

Our client, a well-established entity, faced challenges due to its development team’s unfamiliarity with the latest technological trends. Sensing the need for strategic upgrades to stay competitive in the digital era, they called upon the expertise of Xebia|Xpirit.

Understanding the Challenge

The client had a robust team of developers who were eager to learn but found it challenging to stay abreast of the latest technological trends and best practices. While they had the experience and eagerness to learn, they struggled with the dynamic evolution of the tech industry.

We had a twofold challenge: to build a modern, scalable, and efficient API and to enrich the client’s development team’s knowledge base and skill set, bringing them in line with the latest industry standards.

Embarking on a Solution-Driven Path

In an era where change is the only constant, our team of three experienced consultants worked closely with the client to first understand their specific needs, strengths, and areas for growth. To set a solid foundation for scalable and maintainable future growth, the API was built using the principles of Clean Architecture, a set of best practices that ensure code is easy to understand, modify, and test.

To capitalize on the eagerness of the client’s team, every step of the API development process became a teaching moment. The knowledge sharing was done generously, focusing not only on the ‘how,’ but also the ‘why’ behind specific methods, technologies, and design patterns. This hands-on approach imbued the client’s development team with the confidence and knowledge to not just understand, but also to implement and maintain the systems on their own in the future.

The Azure Pivot and Quality Commitment

Our commitment to delivering uncompromising quality and best-fit solutions led the team to recommend a switch from a planned deployment on Azure Kubernetes Service (AKS) to Azure App Service. This pivot was not just a technological change but a strategic decision, influenced by our understanding of the client team’s capabilities and the specific technical requirements.

Instead of choosing a technology based on its popularity, our consultants recommended the best solution after meticulous evaluation of the client’s needs and capabilities as well as the application’s functional and technical requirements. This decision was backed by in-depth explanations and walkthroughs of potential benefits, ensuring the client stakeholders were not just accepting, but comprehending and championing this shift. This recommendation stood as a testament to Xebia|Xpirit’s adherence to our mantra: “Quality without Compromise.”

Outcomes and Beyond

The collaborative journey between Xebia|Xpirit and our client resulted in an effective, scalable, and maintainable API that will serve as a model for all future API builds. Beyond this technical success, the engagement was transformative for the client’s development team, who found themselves armed with contemporary knowledge, upgraded skills, and a newly found enthusiasm for embracing the changing technology landscape.

Thrilled with the outcome of the engagement, the client didn’t wait long to begin another project with Xebia|Xpirit. We are now gearing up to prepare them for an Azure Cloud Migration utilizing Azure Landing Zones, a clear demonstration of a burgeoning, long-lasting relationship founded on trust, respect, and shared success.

Conclusion

This engagement is a compelling testimony of how we at Xebia|Xpirit live our core values. We focus on more than just providing technological solutions; we believe in building engineering cultures that nurture continuous learning, sharing knowledge generously, ensuring quality without compromise, and building long-lasting relationships.

Our collaboration with the client validated our unique approach, and the success we achieved together reaffirms our role as a trusted partner for businesses seeking to leverage technology for growth and competitiveness. This success story reflects our unwavering commitment to be more than just consultants; we are educators, collaborators, and partners, dedicated to ensuring the success of our clients in the digital age.


Azure Landing Zones

by Mario Mamalis July 21, 2023

As the digital landscape evolves, businesses are increasingly turning to cloud solutions for their scalability, flexibility, and cost-efficiency. Among the various cloud platforms available, Microsoft Azure has emerged as a top choice for enterprises looking to transform their operations and harness the full potential of the cloud. However, successfully migrating applications to Azure requires meticulous planning and execution. One essential aspect that can significantly enhance the migration process is the proper implementation of Azure Landing Zones. In this blog post, we’ll explore the benefits of adopting Azure Landing Zones and how they can expedite the journey of a company migrating its applications to the Azure Cloud.

What are Azure Landing Zones?

Azure Landing Zones are a set of best practices, guidelines, and pre-configured templates designed to establish a foundation for smooth, secure, and scalable cloud adoption. Think of them as a blueprint for creating a well-structured environment in Azure. With Azure Landing Zones, companies can avoid potential pitfalls and ensure that their cloud resources are organized, compliant, and aligned with industry standards from the outset.

Benefits of Proper Azure Landing Zones Implementation

Let’s explore the key benefits of an Azure Landing Zones implementation:

Accelerated Cloud Adoption

One of the primary advantages of Azure Landing Zones is the rapid acceleration of the cloud adoption process. By providing a structured framework and pre-configured templates, organizations can skip time-consuming manual setups and start their cloud journey quickly. This allows the company to focus on core business objectives, reduce deployment cycles, and derive value from Azure’s services sooner.

Enhanced Security and Compliance

Security is a top concern when migrating applications to the cloud. Azure Landing Zones help address these concerns by providing a solid foundation for security and compliance best practices. With predefined security policies and controls, organizations can ensure consistent security configurations across their cloud environment. This includes identity and access management, network security, data protection, and compliance with industry regulations.

Standardized Governance

Maintaining governance and control in a cloud environment can be complex, especially as the infrastructure scales. Azure Landing Zones establish standardized governance models, enabling a centralized approach to managing resources, access permissions, and cost controls. By adopting these predefined governance policies, companies can avoid shadow IT and maintain full visibility and control over their cloud assets.

Improved Cost Management

Proper implementation of Azure Landing Zones allows organizations to optimize cloud costs effectively. By following best practices for resource organization and using Azure’s cost management tools, businesses can track their cloud spending, identify cost-saving opportunities, and avoid unexpected expenses.

Increased Scalability and Flexibility

Azure Landing Zones are designed to accommodate future growth and changing business requirements seamlessly. By setting up a scalable and flexible foundation, companies can expand their cloud infrastructure to meet the evolving needs of their applications without encountering bottlenecks or architectural constraints.

Streamlined Collaboration

For companies with multiple teams or departments involved in the migration process, Azure Landing Zones provide a standardized framework that fosters collaboration and communication. This shared approach ensures that everyone follows the same guidelines, leading to consistent results and a smoother migration experience.

Azure Landing Zone Architecture

The architecture of an Azure landing zone is designed to be flexible and adaptable, catering to various deployment requirements. Its modular and scalable nature enables consistent application of configurations and controls across all subscriptions. By utilizing modules, specific components of the Azure landing zone can be easily deployed and adjusted as your needs evolve over time.
 
The conceptual architecture of the Azure landing zone, depicted below, serves as a recommended blueprint, providing an opinionated and target design for your cloud environment. However, it should be viewed as a starting point rather than a rigid framework. It is essential to tailor the architecture to align with your organization’s unique needs, ensuring that the Azure landing zone perfectly fits your requirements.
 
Conceptual Architecture Diagram (see the Azure landing zone documentation on Microsoft Learn for the full diagram).
 

Landing Zone Types

An Azure Landing Zone can be either a Platform Landing Zone or an Application Landing Zone. A closer look at their respective functions helps build a comprehensive understanding of their roles in cloud architecture.

Platform Landing Zones

Platform Landing Zones, also known as Foundational Landing Zones, provide the core infrastructure and services required for hosting applications in Azure. They are the initial building blocks that establish a well-structured and governed foundation for the entire cloud environment.
 
The primary focus of Platform Landing Zones is on creating a robust and scalable infrastructure to host applications. They address common requirements, such as identity and access management, networking, security, monitoring, and compliance. These landing zones provide shared services that are consumed by multiple application workloads.
 

Key Features of Platform Landing Zones

Below are some key features of Platform Landing Zones:

  • Identity and Access Management: Platform Landing Zones set up centralized identity and access control mechanisms using Microsoft Entra ID (formerly known as Azure Active Directory or AAD) to manage user identities and permissions effectively.
  • Networking: They establish virtual networks, subnets, and network security groups to ensure secure communication and connectivity between various resources.
  • Security and Compliance: Platform Landing Zones implement security best practices and policies to protect the cloud environment and ensure compliance with industry standards and regulations.
  • Governance and Cost Management: Platform Landing Zones include resource organization, tagging, and governance mechanisms to facilitate cost allocation, tracking, and optimization.
  • Shared Services: Platform Landing Zones may include shared services like Azure Policy, Azure Monitor, and Azure Log Analytics to ensure consistent management and monitoring.

Application Landing Zones

Application Landing Zones focus on the specific requirements of individual applications or application types. They are designed to host and optimize the deployment of a particular application workload in Azure.
 
The primary focus of Application Landing Zones is on the unique needs of applications. They address factors such as application architecture, performance, scalability, and availability. Each Application Landing Zone is tailored to meet the demands of a specific application or application family.
 

Key Features of Application Landing Zones

Below are some key features of Application Landing Zones:

  • Application Architecture: Application Landing Zones include resources and configurations specific to the application’s architecture, such as virtual machines, containers, or serverless functions.
  • Performance Optimization: Application Landing Zones may implement caching mechanisms, content delivery networks (CDNs), or other optimizations to enhance application performance.
  • Scalability and Availability: They leverage Azure’s auto-scaling capabilities, load balancers, and availability sets or zones to ensure the application can handle varying workloads and maintain high availability.
  • Data Storage and Management: Application Landing Zones include configurations for databases and data storage solutions, such as Azure SQL Database, Azure Cosmos DB, or Azure Blob Storage, depending on the application’s data requirements.
  • Application-Specific Security: Application Landing Zones may have customized security settings and access controls based on the application’s sensitivity and compliance requirements.

Platform vs. Application Landing Zones Summary

In summary, Platform Landing Zones focus on providing a standardized and governed foundation for the entire cloud environment, addressing infrastructure and shared services needs. They set the stage for consistent management, security, and cost optimization across the organization’s Azure resources. On the other hand, Application Landing Zones concentrate on tailoring the cloud environment to suit the specific requirements of individual applications, optimizing performance, scalability, and data management for each workload.
 

Both Platform Landing Zones and Application Landing Zones play crucial roles in a successful Azure cloud adoption strategy. Platform Landing Zones ensure the overall health and governance of the cloud environment, while Application Landing Zones cater to the unique needs of diverse application workloads, enabling efficient and optimized hosting of applications in Azure.

Conclusion

In conclusion, embracing Azure Landing Zones is a strategic move for any company preparing to migrate its applications to the Microsoft Azure Cloud. With these predefined best practices and guidelines, organizations can streamline their cloud adoption process, ensure robust security and compliance, and optimize resource utilization. The benefits of proper Azure Landing Zones implementation extend beyond the initial migration phase, providing a foundation for scalable growth and seamless management of cloud resources. As a cloud solutions architect, you can use this understanding of Azure Landing Zones to guide businesses towards a successful and rewarding cloud journey with Microsoft Azure. For more information regarding Azure Landing Zones you can explore the documentation on Microsoft Learn.


Durable Functions Workflows

by Mario Mamalis July 6, 2023

In this post we will examine the application of the workflow patterns covered in the Durable Functions Fundamentals part of this series. I will demonstrate a sample application, showcase how to set up the demo solution, and explain important parts of the code. I strongly suggest that you read the first post of this series before continuing here.

Solution Overview

The solution I developed for this demonstration comprises two separate applications. One is an Azure Durable Functions App, developed to run in Isolated Worker Mode, and the other is a hosted Blazor WebAssembly App, used to visualize the workflow patterns. I used a hosted Blazor WebAssembly App because I wanted to create a SignalR Hub for real-time updates to the user interface as the Functions run.

Together these applications make up the Coffee Shop Demo. Imagine a fictitious coffee shop where clients place their coffee orders at the register. After a coffee order is placed, automated coffee machines process the order and prepare the coffees using the following steps: Grind, Dose, Tamp and Brew. All these operations depend on the coffee specifications. The coffee properties are:

  • Coffee Type: Espresso, Cappuccino and Latte
  • Intensity: Single, Double, Triple and Quad
  • Sweetness: None, Low, Medium and Sweet

Depending on the specs of each coffee, the automated coffee machine will execute the necessary steps and report its progress to the web application via SignalR web sockets. Through this process we will be able to see how the different workflow patterns affect the behavior and performance of the coffee machines.
 
Isolated Worker Mode

I would like to provide some insights about the decision to develop and run the Functions using the .NET isolated worker. The isolated worker enables us to run Functions apps using the latest .NET version. The other alternative, the Azure Functions In-Process mode, supports only the same .NET version as the Functions runtime, which means only LTS versions of .NET are supported. At the time this solution was created, .NET 7 was the latest version available, but the Functions runtime supported only .NET 6.

Some of the benefits of using the isolated worker process are listed below, followed by a minimal startup sketch:

  • Fewer conflicts between the assemblies of the Functions runtime and the application runtime.
  • Complete control over the startup of the app, the configurations and the middleware.
  • The ability to use dependency injection and middleware.
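
For reference, the startup of an isolated worker app is just a small Program.cs. The sketch below shows roughly how the host is built and how a service such as the ICoffeeShopHubClient used in this demo can be registered for dependency injection; the exact registrations in the repository may differ.

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    // Wires this process up as an Azure Functions isolated worker.
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        // Example registration: the SignalR publisher injected into the Activity Functions.
        services.AddSingleton<ICoffeeShopHubClient, CoffeeProcessStatusPublisher>();
    })
    .Build();

host.Run();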

Workflow Pattern Demonstrations

In the following short videos I will demonstrate the workflow patterns by running the applications and visualizing the differences.

Function Chaining

First we will take a look at the Function Chaining pattern. Using this pattern we will simulate a coffee order with 5 coffees of different type, intensity and sweetness. The Function Chaining Orchestrator will call four Activity Functions corresponding to the coffee making steps (Grind, Dose, Tamp and Brew), in sequential fashion, one after the other, for each coffee. To simulate the order entry I use Postman to make an HTTP call to a Starter Function, which in turn calls the Orchestrator.

Fan out/fan in

Now we will take a look at the Fan out/fan in pattern. Using this pattern we will simulate a coffee order with the same 5 coffees we used before. This time, however, the Fan Out Fan In Orchestrator will call the Coffee Maker Orchestrator for each coffee in the order in parallel. To do this I use the Coffee Maker Orchestrator as a sub-orchestrator. Again we will utilize Postman to make an HTTP call to the appropriate Starter Function. The difference in processing speed is clear!

Note: I did not use the Chaining Orchestrator as a sub-orchestrator because the Chaining Orchestrator is a standalone Orchestrator that accepts a coffee order. The Coffee Maker Orchestrator accepts a coffee object and executes the steps for one coffee, which is more appropriate as a sub-orchestrator.

Human Interaction

The final demonstration will be the Human Interaction pattern. With this pattern we can pause the Durable Function Orchestrator at any point we want and wait for an external event. The coffee order will be placed, but the coffee machine will not execute the order until the payment is received. To demonstrate that, I initiate the order and then, after I get the appropriate prompt, I make another HTTP call using a predefined event name and a specific URL provided for this purpose. The Human Interaction Orchestrator will intercept that event and will continue processing the order by calling the Coffee Maker sub-orchestrator.

Steps to Create the Durable Functions App

I used Visual Studio 2022 to create the Durable Functions App. The following steps outline the process of creating the solution.

Create a New Durable Functions App Project

1. Create New Project
2. Project Name
3. Select Runtime
4. Add Durable Orchestrator
5. Select Trigger

Durable Functions App Solution Structure

After creating the solution, you will get the default files the template creates for you. Below is the structure of the solution in its final form, after all the code was completed, along with high-level descriptions of the different classes.

Common Folder

Here you can find common Activity Functions, Models, SignalR classes, a Coffee Price Calculator and Enums.

Activity Functions

These are the functions that simulate the steps to prepare a coffee. Activity Functions are where you normally code your application logic. (Please view the first post in the series for detailed information about the different Function types.)

Models

These are the POCO objects that represent the payloads passed between the Functions throughout the project. We have two objects: CoffeeOrder and Coffee. The relationship is one-to-many, where one CoffeeOrder can have many Coffees.
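
Their shape can be inferred from how they are used later in this post. A rough sketch is shown below; the property names and types are approximations and the full definitions live in the GitHub repository.

using System;
using System.Collections.Generic;

public class CoffeeOrder
{
    public Guid Id { get; set; }                       // identifier type assumed
    public List<Coffee> Coffees { get; set; } = new(); // one order has many coffees
    public OrderStatus Status { get; set; }
    public string Message { get; set; }
    public decimal TotalCost { get; set; }             // set by the CalculateOrderPrice activity
}

public class Coffee
{
    public Guid Id { get; set; }                       // identifier type assumed
    public Guid OrderId { get; set; }
    public CoffeeType Type { get; set; }               // Espresso, Cappuccino, Latte
    public CoffeeIntensity Intensity { get; set; }     // Single, Double, Triple, Quad (enum name assumed)
    public CoffeeSweetness Sweetness { get; set; }     // None, Low, Medium, Sweet (enum name assumed)
    public CoffeeStatus Status { get; set; }
    public string Message { get; set; }
}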
 

SignalR Classes

Here you will find all classes necessary to enable communication through SignalR. These classes are organized using composition and inheritance, wrapping the appropriate SignalR client and exposing methods to send messages to the SignalR service I have provisioned on Azure. (I will not get into details about SignalR in this post as it is not the main topic.)
 

Coffee Price Calculator

The CoffeePriceCalculator is a static class with static methods used to calculate the price of coffees and coffee orders.
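
Purely for illustration, such a calculator could look like the sketch below. The base prices and the intensity surcharge are invented for this example; the actual pricing logic is in the repository.

using System.Collections.Generic;
using System.Linq;

public static class CoffeePriceCalculator
{
    // Invented base prices per coffee type, used only for this sketch.
    private static readonly Dictionary<CoffeeType, decimal> BasePrices = new()
    {
        { CoffeeType.Espresso, 2.50m },
        { CoffeeType.Cappuccino, 3.50m },
        { CoffeeType.Latte, 4.00m }
    };

    public static decimal CalculatePrice(Coffee coffee) =>
        BasePrices[coffee.Type] + 0.50m * ((int)coffee.Intensity - 1); // invented per-shot surcharge

    public static decimal CalculateOrderTotal(CoffeeOrder order) =>
        order.Coffees.Sum(CalculatePrice);
}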
 

Enums

The Enums class contains all enumerations used in the code such as the coffee type, sweetness and different statuses.

 

Patterns Folder

In this folder you will find all the Starters and Orchestrators, organized in subfolders for each pattern. As you can see we have Chaining, FanOutFanIn, and HumanInteraction subfolders containing the appropriate classes. 

The HumanInteraction subfolder contains three additional classes:

  • An Activity Function (CalculateOrderPrice) that utilizes the static CoffeePriceCalculator class to calculate the prices.
  • An Activity Function (SendPaymentRequest) that simulates a two-second delay for sending a payment request.
  • A Model (ProcessPaymentEventModel, sketched below) that is expected to be included in the request made by the human interacting with the orchestration. That model contains a PaymentStatus property, which is an enumeration. If the PaymentStatus passed to the event is “Approved”, the orchestrator will continue the work and prepare the coffees.
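
Based on that description, the event payload model is essentially a wrapper around the payment status. A sketch (the exact members may differ from the repository):

public class ProcessPaymentEventModel
{
    // Enumeration from the Enums class; "Approved" lets the orchestration continue.
    public PaymentStatus PaymentStatus { get; set; }
}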

Finally the CoffeeMaker subfolder contains the sub-orchestrator used by the FanOutFanIn and HumanInteraction Orchestrator Functions.

Important Code Sections

Now let’s go over some important code to help you understand how things come together.
 

Activity Functions

The code below is the Brew Activity Function. We can see that it has a constructor that accepts an ICoffeeShopHubClient, for which the CoffeeProcessStatusPublisher implementation is injected. As you can see, we use dependency injection just like we would in any normal .NET 7 application. This is straightforward because we use the Isolated Worker Mode, as I explained in the beginning. This injected object is found in many other Functions, and it is used to publish messages to SignalR so that we can eventually visualize them in the Blazor Web App.

You can see that the Activity Function has a name attribute. This is very important because that is how we invoke Activity Functions from Orchestrators. We can also see that the Run method is an async method and has an ActivityTrigger attribute to indicate the type of Function it is. It accepts one object of type Coffee as the first parameter and a FunctionContext as the second parameter. We can only pass a single input parameter to an Activity Function (other than the context). In this case we encapsulate all the properties we need in the POCO class Coffee and we are good to go.

The FunctionContext is not used in this scenario, but it could be used to call other Activity Functions if needed. If we were using the In-Process mode, we would use the context to extract the input passed into the function, since in that model parameters cannot be passed directly into the Function. The context would also be of a different type (IDurableActivityContext).

As mentioned above, all the Activity Functions used in this project are simple and simulate the steps of the coffee-making process. In a real scenario you can have more complex code. It is, however, recommended that you keep the function code simple and use well-known design patterns to push the complexity into other class libraries.

public class Brew
{
    private readonly ICoffeeShopHubClient _coffeeProcessStatusPublisher;

    public Brew(ICoffeeShopHubClient coffeeProcessStatusPublisher)
    {
        _coffeeProcessStatusPublisher = coffeeProcessStatusPublisher;
    }

    [Function(nameof(Brew))]
    public async Task<Coffee> Run([ActivityTrigger] Coffee coffee, FunctionContext executionContext)
    {
        ILogger logger = executionContext.GetLogger(nameof(Brew));

        logger.LogInformation($"Brewing {coffee.Type} coffee  {coffee.Id} for order: {coffee.OrderId} ...");

        await Task.Delay((int)coffee.Intensity * 1000);

        coffee.Status = CoffeeStatus.Brewed;
        coffee.Message = "Brewing Completed";

        logger.LogInformation($"Brewing completed for {coffee.Type} coffee  {coffee.Id} of order: {coffee.OrderId}.");

        await _coffeeProcessStatusPublisher.UpdateCoffeeProcessStatus(coffee);

        return coffee;
    }
}
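
For comparison only, here is a rough sketch of how the same activity could look in the In-Process model mentioned above, where the input is read from IDurableActivityContext instead of being bound directly to a Coffee parameter.

// In-Process model (Microsoft.Azure.WebJobs.Extensions.DurableTask) - sketch only.
[FunctionName("Brew")]
public static Coffee RunInProcess([ActivityTrigger] IDurableActivityContext context)
{
    // The input is extracted from the context instead of being bound to a parameter.
    var coffee = context.GetInput<Coffee>();

    // ... same brewing logic as in the isolated worker version above ...

    return coffee;
}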

Starter Functions (Durable Clients)

The following code is a representation of a Starter Function otherwise known as a Durable Client Function. You can see that again we have a function name, a trigger (which in this case is an HttpTrigger) and two parameters: The DurableTaskClient and the FunctionContext.

This is one of the Functions I call over HTTP using Postman. The main purpose of a Durable Client Function is to kick off an Orchestrator Function. We can see that happening in the call to ScheduleNewOrchestrationInstanceAsync. Before that, we read the request body and extract (deserialize) the CoffeeOrder, which, if you remember from the videos, was passed in as a JSON object.

public static class ChainingStarter
{
    [Function(nameof(StartChainingOrchestrator))]
    public static async Task<HttpResponseData> StartChainingOrchestrator(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
        [DurableClient] DurableTaskClient client,
        FunctionContext executionContext)
    {
        ILogger logger = executionContext.GetLogger("StartChainingOrchestrator");

        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        var coffeeOrder = JsonConvert.DeserializeObject<CoffeeOrder>(requestBody);

        // Function input comes from the request content.
        string instanceId = await client.ScheduleNewOrchestrationInstanceAsync(
            nameof(ChainingOrchestrator), coffeeOrder);

        logger.LogInformation("Started Coffee Order Orchestration with ID = '{instanceId}'.", instanceId);

        // Returns an HTTP 202 response with an instance management payload.
        // See https://learn.microsoft.com/azure/azure-functions/durable/durable-functions-http-api#start-orchestration
        return client.CreateCheckStatusResponse(req, instanceId);
    }
}

Orchestrator Function: ChainingOrchestrator

In the first pattern demonstrated we utilize the ChainingOrchestrator. This first piece of code shows the constructor dependency injection, the function name and the appropriate trigger used (OrchestrationTrigger).

You can see that I create the logger with CreateReplaySafeLogger. If you remember from the first post of this series, there are many code constraints we have to follow in Orchestrator Functions because the Orchestrator stops and restarts frequently. The code cannot create any ambiguity. To ensure reliable execution of the orchestration state, Orchestrator Functions must contain deterministic code, meaning they must produce the same result every time they run.

I then use the context to get the CoffeeOrder input parameter; I could have passed it in as a parameter in the signature as well.

The code within the try block starts the actual orchestration logic. The CallActivityAsync calls are good examples of how we invoke Activity Functions: we use the name of the function and pass the input as a parameter.

public class ChainingOrchestrator
{
    private readonly ICoffeeShopHubClient _coffeeProcessStatusPublisher;

    public ChainingOrchestrator(ICoffeeShopHubClient coffeeProcessStatusPublisher)
    {
        _coffeeProcessStatusPublisher = coffeeProcessStatusPublisher;
    }

    [Function(nameof(ChainingOrchestrator))]
    public static async Task<CoffeeOrder> RunOrchestrator(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        ILogger logger = context.CreateReplaySafeLogger(nameof(ChainingOrchestrator));
        var coffeeOrder = context.GetInput<CoffeeOrder>();

        try
        {
            if (coffeeOrder == null)
            {
                coffeeOrder = new CoffeeOrder
                {
                    Status = OrderStatus.Failed,
                    Message = "Coffee order not specified."
                };

                logger.LogInformation(coffeeOrder.Message);

                await context.CallActivityAsync("UpdateCoffeeOrderStatus", coffeeOrder);

                return coffeeOrder;
            }

            coffeeOrder.Status = OrderStatus.Started;
            coffeeOrder.Message = $"Started processing coffee order {coffeeOrder.Id}.";
            await context.CallActivityAsync("UpdateCoffeeOrderStatus", coffeeOrder);

In the code below (still in the same file), you can see how I chain the function calls. I iterate over the coffee collection of the coffee order and invoke each Activity Function (coffee-making step) in sequence.

foreach (var coffee in coffeeOrder.Coffees)
{
    coffee.Message = $"Started making {coffee.Type} coffee {coffee.Id} for order {coffee.OrderId}.";
    await context.CallActivityAsync("UpdateCoffeeMakerStatus", coffee);

    var processedCoffee = await context.CallActivityAsync<Coffee>(nameof(Grind), coffee);
    processedCoffee = await context.CallActivityAsync<Coffee>(nameof(Dose), processedCoffee);
    processedCoffee = await context.CallActivityAsync<Coffee>(nameof(Tamp), processedCoffee);
    processedCoffee = await context.CallActivityAsync<Coffee>(nameof(Brew), processedCoffee);

    if (processedCoffee.Status == CoffeeStatus.Brewed)
    {
        processedCoffee.Message = $"{processedCoffee.Type} coffee {processedCoffee.Id} " +
            $"for order {processedCoffee.OrderId} is ready.";

        await context.CallActivityAsync("UpdateCoffeeMakerStatus", processedCoffee);
    }
}

Orchestrator Function: FanOutFanInOrchestrator

The following code is the most important part of the Fan Out/Fan In pattern. First I create a list of tasks of type Coffee (coffeeTasks). Then I iterate over the coffee order and add the task returned by the sub-orchestrator function (CoffeeMakerOrchestrator) to the list. Each task returns a Coffee object that contains the status. Finally, with Task.WhenAll I await all tasks to complete, and when that is done the orchestrator finishes execution (a few steps below, not shown here).
 
This parallel execution of the sub-orchestrator makes the processing simulation prepare all coffees together. As you can see in the visualization in the video, the execution is a lot faster. The code of the sub-orchestrator is very similar to the Chaining Orchestrator code I showed above. The only difference is that the CoffeeMakerOrchestrator (used as a sub-orchestrator here) accepts a coffee instead of a coffee order. In this way I can do the parallel execution calls in the parent Orchestrator, where they actually belong.

 

var coffeeTasks = new List<Task<Coffee>>();

coffeeOrder.Status = OrderStatus.Started;
coffeeOrder.Message = $"Started processing coffee order {coffeeOrder.Id}.";
await context.CallActivityAsync("UpdateCoffeeOrderStatus", coffeeOrder);

foreach (var coffee in coffeeOrder.Coffees)
{
    var task = context.CallSubOrchestratorAsync<Coffee>(nameof(CoffeeMakerOrchestrator), coffee);
    coffeeTasks.Add(task);
}

var coffeeMessages = await Task.WhenAll(coffeeTasks);

coffeeOrder.Status = OrderStatus.Completed;
coffeeOrder.Message = $"Coffee order {coffeeOrder.Id} is ready.";
await context.CallActivityAsync("UpdateCoffeeOrderStatus", coffeeOrder);

Orchestrator Function: HumanInteractionOrchestrator

This code below shows several interesting points of this pattern and the capability of the Durable Functions to wait for external events.

First you can see the call to the Activity Function that calculates the price (CalculateOrderPrice). Immediately after that I send a payment request and update the coffee order status appropriately. The WaitForExternalEvent call then puts the Orchestrator to sleep, waiting for an external event named “ProcessPayment” to trigger the rest of the code to execute. Once the event is received, I examine the payload, and only if the payment status is Approved do I proceed with the coffee-making process.

After this point the code is identical to the Fan Out/Fan In pattern.

coffeeOrder = await context.CallActivityAsync<CoffeeOrder>("CalculateOrderPrice", coffeeOrder);

await context.CallActivityAsync("SendPaymentRequest", coffeeOrder);

coffeeOrder.Status = OrderStatus.Placed;
coffeeOrder.Message = $"Coffee order {coffeeOrder.Id} was placed. Total Cost: {coffeeOrder.TotalCost:C}. Waiting for payment confirmation...";
await context.CallActivityAsync("UpdateCoffeeOrderStatus", coffeeOrder);

var response = await context.WaitForExternalEvent<ProcessPaymentEventModel>("ProcessPayment");

if (response.PaymentStatus == PaymentStatus.Approved)
{
    coffeeOrder.Status = OrderStatus.Placed;
    coffeeOrder.Message = $"Payment received for coffee order {coffeeOrder.Id}";
    await context.CallActivityAsync("UpdateCoffeeOrderStatus", coffeeOrder);

    var coffeeTasks = new List<Task<Coffee>>();

    coffeeOrder.Status = OrderStatus.Started;
    coffeeOrder.Message = $"Started processing coffee order {coffeeOrder.Id}.";
    await context.CallActivityAsync("UpdateCoffeeOrderStatus", coffeeOrder);

    foreach (var coffee in coffeeOrder.Coffees)
    {
        var task = context.CallSubOrchestratorAsync<Coffee>(nameof(CoffeeMakerOrchestrator), coffee);
        coffeeTasks.Add(task);
    }

If you remember from the corresponding video, when I called the Human Interaction HTTP endpoint with a POST, I received the response shown below. The sendEventPostUri property in the response has the URL for posting events to this particular instance of the Orchestrator. If you scroll to the right you can see the {eventName} route parameter. I typed the name of the event there (ProcessPayment) and then made a POST call to the endpoint. At that point, this particular Orchestrator instance received the event and continued the code execution. This is pretty powerful!

{
    "id": "c7ccbd737d194f7cbf66995b8bcc3e03",
    "purgeHistoryDeleteUri": "https://func-durable-functions-demo.azurewebsites.net/runtime/webhooks/durabletask/instances/c7ccbd737d194f7cbf66995b8bcc3e03?code=7WrfquVTHSs9yXTv9Rk0-NacLhqEHrV2mozAHL-jYGTcAzFuA7_Erg==",
    "sendEventPostUri": "https://func-durable-functions-demo.azurewebsites.net/runtime/webhooks/durabletask/instances/c7ccbd737d194f7cbf66995b8bcc3e03/raiseEvent/{eventName}?code=7WrfquVTHSs9yXTv9Rk0-NacLhqEHrV2mozAHL-jYGTcAzFuA7_Erg==",
    "statusQueryGetUri": "https://func-durable-functions-demo.azurewebsites.net/runtime/webhooks/durabletask/instances/c7ccbd737d194f7cbf66995b8bcc3e03?code=7WrfquVTHSs9yXTv9Rk0-NacLhqEHrV2mozAHL-jYGTcAzFuA7_Erg==",
    "terminatePostUri": "https://func-durable-functions-demo.azurewebsites.net/runtime/webhooks/durabletask/instances/c7ccbd737d194f7cbf66995b8bcc3e03/terminate?reason={{text}}}&code=7WrfquVTHSs9yXTv9Rk0-NacLhqEHrV2mozAHL-jYGTcAzFuA7_Erg=="
}
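
Alternatively, if you already have a DurableTaskClient instance (for example one injected with the [DurableClient] attribute, as in the Starter Function earlier), the same event can be raised programmatically instead of posting to the sendEventPostUri by hand. A minimal sketch, where client and instanceId correspond to the Durable Client and the instance id from the response above:

// Raise the "ProcessPayment" event against a running orchestration instance.
await client.RaiseEventAsync(
    instanceId,
    "ProcessPayment",
    new ProcessPaymentEventModel { PaymentStatus = PaymentStatus.Approved });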

Azure Resources

Both the Web Application and the Durable Functions Application shown during the demo have been running on resources provisioned on Azure.

Conclusion

In this post I presented three important workflow patterns we can implement using Durable Functions. I hope this has sparked your interest in investigating Durable Functions further. With Durable Functions we can develop complex workflows without the need for a message bus. Of course such a decision has pros and cons, but having all the orchestration code organized in one place, without relying on additional technologies, is a pretty powerful argument. All the code I developed for this post can be found here: CoffeeShopDemo and DurableFunctionsDemo.
Development
Serverless

Durable Functions Fundamentals

by Mario Mamalis December 5, 2022
written by Mario Mamalis

Durable Functions is an extension of Azure Functions that provides additional functionality for programming scenarios requiring stateful workflows. With Durable Functions we can simplify the implementation of workflow-related programming patterns that would otherwise require more complex setups.

Serverless Computing

Let’s take a few steps back and talk about serverless computing on Azure first. Serverless computing is one of the compute options available on Azure. It enables developers to go from code to cloud faster by removing dependencies on any type of infrastructure. Instead of worrying about where and how the code runs, developers only have to focus on coding and adding business value to the applications they are developing.

Of course this does not work by magic. The code still has to run on servers; however, Azure takes care of provisioning, scaling and managing the hardware and operating systems behind the scenes. The serverless notion applies from the developers' point of view. This is a very powerful and exciting capability to have! Azure provides several serverless compute offerings such as Azure Functions, Logic Apps, Serverless Kubernetes (AKS) and, most recently, Azure Container Apps. I will talk more about those in different posts.

Azure Functions Overview

As I mentioned in the beginning, Durable Functions is an extension of Azure Functions. I am assuming that most developers already have some idea about what Azure Functions are and I will not be covering the basics in detail. In case you want to learn more about Azure Functions, there is plenty of information on the internet. You can start here.

As a brief overview, with Azure Functions you can deploy blocks of code on a Functions as a Service (FaaS) environment. Each function can be executed using different triggers such as a timer, a queue message, an HTTP request and others. The code can be written in several supported languages such as C#, F#, JavaScript, Python and more. One of the most important characteristics of Azure Functions is that they offer Bindings for easy integration with other services such as Blob Storage, Cosmos DB, Service Bus, Event Grid and many others.
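To make this a bit more concrete, here is a minimal sketch of a regular (non-durable) Azure Function that combines an HTTP trigger with a Storage Queue output binding. The function and queue names are purely illustrative and not part of the demo code.

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class PlaceOrderFunction
{
    [FunctionName("PlaceOrder")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [Queue("incoming-orders")] IAsyncCollector<string> ordersQueue,
        ILogger log)
    {
        // The HTTP trigger starts the function; the Queue binding handles the Storage plumbing.
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        await ordersQueue.AddAsync(body);

        log.LogInformation("Order accepted and queued.");
        return new AcceptedResult();
    }
}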

Introducing Durable Functions

As powerful and convenient as the basic capabilities of Azure Functions are, they do leave some room for improvement. That’s where the Durable Functions extension comes into play.

Many times we need to execute Azure Functions in relation to previously executed ones, as part of a larger workflow. Sometimes we want to pass the output of one function as input to the next, decide which functions to call next, handle errors, or even roll back the work of previously executed functions. You might think that we can already handle such scenarios by adding a message broker such as Service Bus to our architecture. That is true; however, Durable Functions provide a simpler and more powerful alternative.

With Durable Functions we can orchestrate the execution of functions in a workflow defined in code. This way we never have to leave the orchestration context and we can simplify our architecture. It is also easy to have a complete picture of the workflow just by looking at the code in the orchestrator.

Durable Function Types

The Durable Functions extension enables the use of special function types. Each function type has a unique purpose within a workflow.

Activity Functions

Activity Functions are the components that execute the main code of the workflow. This is where the main tasks execute. There are no restrictions on the type of code we write in Activity Functions. We can access databases, interact with external services, execute CPU intensive code etc. If you think of a workflow, all the code that executes specific tasks should be in Activity Functions. The Durable Task Framework guarantees the execution of an Activity Function at least once within the context of an orchestration instance.

Activity Functions are configured through the use of an Activity Trigger attribute. By default Activity Functions accept an IDurableActivityContext parameter as input, but we can also pass primitive types and custom objects as long as they are JSON-serializable. One caveat here is that we can only pass a single value in the signature. If you want to pass multiple values, use a custom object instead.
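As an illustration, here is a minimal sketch of an Activity Function that receives a custom object. The Coffee type and its Id and IsReady members are assumptions made for this example, not the actual demo models.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MakeCoffeeActivity
{
    // The [ActivityTrigger] attribute marks this as an Activity Function.
    // A single JSON-serializable input (here an assumed Coffee object) is passed in.
    [FunctionName("MakeCoffee")]
    public static async Task<Coffee> Run(
        [ActivityTrigger] Coffee coffee,
        ILogger log)
    {
        log.LogInformation($"Preparing coffee {coffee.Id}...");

        // Unlike Orchestrators, Activity Functions can freely do I/O,
        // call external services or run CPU-intensive work.
        await Task.Delay(TimeSpan.FromSeconds(2)); // simulate the actual work

        coffee.IsReady = true; // assumed property, for illustration only
        return coffee;
    }
}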

The fact that Activity Functions contain most of the action does not mean that all the code to accomplish a task should be in the function itself. I encourage you to develop your projects with whatever architectural style you are comfortable with, and separate your code into different layers and classes, just as you would for an API project. My favorite style of code organization is Clean Architecture, and I have developed very complex Durable Function-based services using its full potential. Treat your Activity Functions as you treat API Controllers in an API project: keep the code in the Activity Function itself to a minimum and invoke code in different layers.

Entity Functions

Entity Functions, also known as Durable Entities, are a special type of Durable Function available in version 2.0 and above. They are meant for maintaining small pieces of state and making that state available for reading and updating. They are very similar to virtual actors, inspired by the actor model. Entity Functions are particularly useful in scenarios that require keeping track of small pieces of information for thousands or hundreds of thousands of objects; for example, the score of each player in a computer game or the status of individual IoT devices. They provide a means to scale out applications by distributing state and work across many entities.

Entity Functions are defined with a special trigger, the Entity Trigger. They are accessed through a unique identifier, the Entity ID, which consists of the Entity Name and the Entity Key; both are strings. The name should match the Entity Function name, and the key must be unique among all other entity instances that have the same name, so it is safest to use a GUID. My recommendation is to use Entity Functions like we use classes and methods: define properties that hold the state and methods that perform operations on that state. For example, we can have an Entity Function called “BankAccount” with a property “Balance” and methods “Deposit”, “Withdraw”, and “GetBalance”.
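Here is a minimal sketch of what that “BankAccount” example could look like using the class-based entity syntax; it is only an illustration of the idea, not production-ready code.

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Newtonsoft.Json;

[JsonObject(MemberSerialization.OptIn)]
public class BankAccount
{
    // The state of the entity; persisted automatically between calls.
    [JsonProperty("balance")]
    public decimal Balance { get; set; }

    // Operations on the state, exposed as entity methods.
    public void Deposit(decimal amount) => Balance += amount;
    public void Withdraw(decimal amount) => Balance -= amount;
    public decimal GetBalance() => Balance;

    // The Entity Trigger dispatches incoming operations to the methods above.
    [FunctionName(nameof(BankAccount))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<BankAccount>();
}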

Entity Functions can be called using one-way communication (call the function without waiting for a response) or two-way communication (when we want to get a response back). Entities can be called from Orchestrator Functions, Client Functions, Activity Functions or other Entity Functions, but not all forms of communication are supported in all contexts. From within clients we can signal (call one-way) an entity and we can read the entity state. From within orchestrators we can both signal (one-way) and call (two-way) entities. From other entities we can only signal (one-way) other entities.
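For example, from within an Orchestrator Function the entity sketched above could be signaled or called like this; the entity key is just an illustrative string, and in practice a GUID is safer as mentioned earlier.

// Inside an Orchestrator Function (context is the IDurableOrchestrationContext):
var accountId = new EntityId(nameof(BankAccount), "account-42"); // entity name + unique key

// One-way: signal the entity and continue without waiting for a result.
context.SignalEntity(accountId, "Deposit", 100m);

// Two-way: call the entity and wait for the current balance to come back.
decimal balance = await context.CallEntityAsync<decimal>(accountId, "GetBalance");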

Client Functions

Client Functions are the functions that call or trigger Orchestrator Functions and Entity Functions. Orchestrator and Entity Functions cannot be triggered directly; they must instead receive a message from a Client Function. In other words, Client Functions are starter functions. Any non-orchestrator function can be a Client Function as long as it uses a Durable Client output binding. The code below shows an example of a Client Function that starts an Orchestrator Function.

public static class CoffeeMakerStarter
{
    [FunctionName("CoffeeMakerStarter")]
    public static async Task<IActionResult> HttpStart(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequest req,
        [DurableClient] IDurableOrchestrationClient starter,
        ILogger log)
    {
        string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
        var coffeeOrder = JsonConvert.DeserializeObject<CoffeeOrder>(requestBody);  

        // Function input comes from the request content.
        string instanceId = await starter.StartNewAsync<object>("CoffeeMakerOrchestrator", coffeeOrder);

        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");

        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}

Orchestrator Functions

Orchestrator Functions orchestrate the execution sequence of other Durable Function types using procedural code within a workflow. They can contain conditional logic and error handling code, call other functions synchronously and asynchronously, take the output of one function and pass it as input to subsequent functions, and even initiate sub-orchestrations. In general, they contain the orchestration code of a workflow. With Orchestrator Functions we can implement complex patterns such as Function Chaining, Fan Out/Fan In, Async HTTP APIs, Monitor, Human Interaction and Aggregator. We will explore the theory behind the most important of these patterns later on in this post, and sample code will be provided in subsequent posts.

Orchestrator Code Constraints

Code in Orchestrator Functions must comply with certain constraints. Failure to honor these constraints will result in unpredictable behavior. Before we list what those constraints are, it is important to understand why this is the case.

Under the covers, Orchestrators utilize the Durable Task Framework, which enables long-running persistence in workflows using async/await constructs. In order to maintain persistence and be able to safely replay the code and continue from the appropriate point in the workflow, the Durable Functions extension uses Azure Storage Queues to trigger the next Activity Function in the workflow. It also uses Storage Tables to save the state of the orchestrations in progress. The entire orchestration state is stored using Event Sourcing. With Event Sourcing, rather than storing only the current state, the whole execution history of actions that resulted in the current state is stored. This pattern enables the safe replay of the orchestration’s code: code that was already executed within the context of a particular orchestration instance will not be executed again. For more clarity consider the following diagram:

This diagram shows a typical workflow involving an Orchestrator and two Activity Functions. As you can see, the Orchestrator sleeps and wakes up several times during the workflow. Every time it wakes up, it replays (re-executes) the entire code from the start to rebuild its local state. While the Orchestrator is doing that, the Durable Task Framework examines the execution history stored in the Azure Storage Tables, and if the code encounters an Activity Function that has already been executed, it replays that function’s result and the Orchestrator continues to run until the code is finished or until a new activity needs to be triggered.

To ensure reliable execution of the orchestration state, Orchestrator Functions must contain deterministic code, meaning the code must produce the same result every time it runs. This imposes certain code constraints such as:

  • Do not generate random numbers or GUIDs
  • Do not ask for current dates and times. If you have to, then use IDurableOrchestrationContext.CurrentUtcDateTime
  • Do not access data stores such as databases
  • Do not look for configuration settings
  • Do not write blocking code such as I/O code or Thread related code
  • Do not perform any async operations such as Task.Run, HttpClient.SendAsync etc
  • Do not use any bindings including orchestration client and entity client
  • Avoid using static variables
  • Do not use environment variables

This is not a comprehensive list, but you already get the idea that code in Orchestrator Functions should be limited to workflow orchestration code. Everything you cannot do in Orchestrators you can, and should, do in the Activity or Entity Functions invoked by the Orchestrators. For a full list and a lot more details about these constraints you can go to the Microsoft Learn site here.
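As a quick illustration of what replay-safe orchestrator code looks like, here is a minimal sketch; the activity name is made up for the example.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class ReplaySafeOrchestrator
{
    [FunctionName("ReplaySafeOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // DateTime.UtcNow and Guid.NewGuid() would return different values on every replay;
        // the orchestration context exposes replay-safe equivalents instead.
        DateTime now = context.CurrentUtcDateTime;
        Guid correlationId = context.NewGuid();

        // Delays must also go through the context rather than Task.Delay.
        await context.CreateTimer(now.AddMinutes(5), CancellationToken.None);

        // I/O, configuration lookups, HTTP calls etc. belong in an Activity Function.
        await context.CallActivityAsync("DoTheActualWork", correlationId);
    }
}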

Workflow Patterns

Now that we have learned the basic building blocks of Durable Functions we can begin exploring the different patterns that become available for us to implement. As a reminder, in this post we will only be covering the theory behind some of these patterns. Actual sample implementations will follow in subsequent posts.

Pattern 1: Function Chaining

The first pattern we can implement using Durable Functions is the Function Chaining pattern. Imagine any scenario of a workflow that requires multiple sequential steps to happen to accomplish a goal. Each step depends on the previous step to be completed before it can be executed, and steps may or may not require as input the output of a previous step.

In the following diagram we can see a Function Chaining pattern of a fictitious automatic espresso coffee maker machine. The pattern utilizes a Client Function (starter) that triggers an Orchestrator Function and the Orchestrator executes 4 Activity Functions in sequence.

The power that comes with Durable Functions, and specifically with the orchestration, is that we can now introduce complex logic and error handling to gracefully handle errors in any step of the workflow, or execute different tasks based on certain activity outputs. This would be much harder to do if we used regular functions that called one another. In addition, we also get the benefits of the Durable Task Framework, which allows the workflow to go to sleep while an activity executes and then wake up and execute the appropriate next step.
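A rough sketch of what the espresso maker’s chaining Orchestrator could look like is shown below, just to make the idea concrete; the activity names and the Coffee type are illustrative and not taken from the demo code.

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class EspressoMakerOrchestrator
{
    [FunctionName("EspressoMakerOrchestrator")]
    public static async Task<Coffee> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var coffee = context.GetInput<Coffee>();

        // Each activity starts only after the previous one has completed,
        // and the output of one step becomes the input of the next.
        coffee = await context.CallActivityAsync<Coffee>("GrindBeans", coffee);
        coffee = await context.CallActivityAsync<Coffee>("HeatWater", coffee);
        coffee = await context.CallActivityAsync<Coffee>("BrewEspresso", coffee);
        coffee = await context.CallActivityAsync<Coffee>("PourInCup", coffee);

        return coffee;
    }
}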

Pattern 2: Fan out/fan in

Another powerful pattern we can implement is the Fan out/fan in pattern. This pattern fits well in scenarios where we want to execute multiple similar tasks in parallel and then wait for them all to complete before we execute the next task. For example, let’s now consider that the automated coffee maker machine can prepare multiple coffees. As the owner of the coffee shop, when I get an order from a group of 3 people, I want to prepare the 3 coffees at the same time and serve them together. In Functions terms, we can wrap the coffee workflow in one orchestration and then kick off 3 instances of that orchestration from a parent orchestrator, making each one, in a sense, a sub-orchestration. This is depicted in the following diagram.
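A minimal sketch of the parent Orchestrator for this scenario could look like the following. It reuses the CoffeeOrder, Coffee and CoffeeMakerOrchestrator names that appear in the demo, but the details of the real implementation may differ.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class CoffeeOrderOrchestrator
{
    [FunctionName("CoffeeOrderOrchestrator")]
    public static async Task<Coffee[]> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var order = context.GetInput<CoffeeOrder>();
        var coffeeTasks = new List<Task<Coffee>>();

        // Fan out: start one sub-orchestration per coffee without awaiting them one by one.
        foreach (var coffee in order.Coffees)
        {
            coffeeTasks.Add(
                context.CallSubOrchestratorAsync<Coffee>("CoffeeMakerOrchestrator", coffee));
        }

        // Fan in: resume only when all coffees are ready, so they can be served together.
        Coffee[] coffees = await Task.WhenAll(coffeeTasks);
        return coffees;
    }
}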

Pattern 3: Human Interaction

So far we have seen patterns that provide fully automated solutions. But how about workflows that require human intervention? There is a way to solve those types of scenarios as well. The Durable Functions Orchestrator context exposes a method called WaitForExternalEvent(). This method also accepts a TimeSpan parameter so that we can specify how long we are willing to wait for an external event before continuing the workflow. This is pretty powerful, and I should point out that no extra charges are incurred while the function is waiting if it is running on a Consumption plan.
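Inside the Orchestrator, the waiting part could look roughly like this small fragment; the event name and the timeout value are made up purely for illustration.

// Inside an Orchestrator Function: wait up to 3 days for a human response.
try
{
    bool approved = await context.WaitForExternalEvent<bool>("Approval", TimeSpan.FromDays(3));
    // The event arrived in time: continue the workflow based on the response.
}
catch (TimeoutException)
{
    // Nobody responded before the timer expired: escalate, send a reminder, or cancel.
}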

So let’s assume that we are building a workflow that handles the on-boarding process for a new employee. As we can see in the diagram below, the Orchestrator first gets the offer letter from storage and then sends an email to the candidate. At this point the Orchestrator goes to sleep and waits for human interaction, which in this case happens when the candidate clicks the acceptance link inside the email. This link invokes a regular HTTP Trigger Function, and this new function in turn resumes the Orchestrator by calling RaiseEventAsync(), which is part of the IDurableOrchestrationClient interface.
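The HTTP Trigger Function behind the acceptance link could look roughly like the sketch below; the function name, event name and query parameter are assumptions made for illustration.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class OfferResponseHandler
{
    [FunctionName("OfferResponseHandler")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
        [DurableClient] IDurableOrchestrationClient client,
        ILogger log)
    {
        // The acceptance link carries the orchestration instance id (assumed query parameter).
        string instanceId = req.Query["instanceId"];

        // Wake up the sleeping Orchestrator by raising the event it is waiting for.
        await client.RaiseEventAsync(instanceId, "OfferAccepted", true);

        log.LogInformation($"Raised OfferAccepted for instance '{instanceId}'.");
        return new OkResult();
    }
}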

Wrapping Up

As we have seen, Azure Durable Functions provide a lot of useful functionality that allows us to create powerful workflows. Serverless computing is a revolutionary platform that we can use to build software on. It allows us to focus on coding and completely forget about the nuances of infrastructure and operating systems. We can create services that scale in and out automatically to meet demand, handle complex workflows, pay only for the compute power we use (on a Consumption plan), and save time and effort because we do not have to maintain any of the infrastructure.

We have focused on the fundamental theory of the most important aspects of Durable Functions in this first post of the series. In subsequent posts we will dive deeper into the setup, development and deployment of Durable Functions. Stay tuned!

