Wind energy plays a key role in global decarbonization efforts by producing emissions-free electricity from an abundant resource. In 2022, wind energy produced 2,100 terawatt hours (TWh) worldwide, accounting for more than 7% of global electricity, and is expected to reach 7,400 TWh by 2030.
Despite that potential, several challenges must be addressed to achieve grid decarbonization goals. As wind energy deployment increases, issues such as gearbox fatigue and blade leading-edge erosion must be resolved to ensure a predictable energy supply. For example, in the United States, wind turbines are expected to operate at full capacity for 25 years, but their performance degrades by 10% after 11 years of operation.
This blog post presents a digital twin architecture that uses National Renewable Energy Laboratory (NREL) OpenFAST, an open-source multiphysics wind turbine simulation tool, to characterize operational anomalies and continuously improve wind farm performance. This approach can support an overall maintenance strategy that optimizes performance and profitability while reducing risk.
Digital twins come in many forms, but this architecture represents a digital twin in which a physical wind turbine is connected to the cloud through IoT devices, and uses on-demand simulation to improve performance and enhance monitoring. Insights gained from simulation allow you to update your physical asset management system in near real time to balance operational performance.
Why build this?
This digital twin can capture discrepancies in reliability ratings by benchmarking real-world time series against simulations. Aeroelastic simulators such as OpenFAST define operating ranges as part of wind turbine design and certification according to IEC 61400-1 and IEC 61400-3. However, subtle and unanticipated changes in environmental conditions that were not considered in the initial design certification, such as increased turbulence intensity, can accelerate degradation.
Because it uses the same simulation software employed in wind turbine design, this architecture allows you to verify whether controller changes can limit incremental performance degradation before deploying them to the physical turbine. This example scenario is one that operators currently struggle with, and it is described in the next section.
Digital twin architecture
Figure 1 shows an event-driven architecture where resources launch on-demand simulations when an anomaly occurs.
Simulation and real-world results can be fed into the calculation engine to update wind turbine controller software and improve operational performance through the following workflows:
- The wind turbine’s sensors are connected to the AWS Cloud using AWS IoT Core.
- IoT rules forward sensor data to Amazon Timestream, a dedicated time series database.
- A scheduled AWS Lambda function queries Timestream to detect anomalies in time series data.
- When an anomaly is detected, Amazon Simple Notification Service (Amazon SNS) issues a notification, and a Lambda preprocessor prepares the OpenFAST simulation input file.
- Simulations run on demand, retrieving the latest OpenFAST container image from Amazon Elastic Container Registry (Amazon ECR).
- Simulations are dispatched through a RESTful API and run using AWS Fargate.
- Simulation results are uploaded to Amazon Simple Storage Service (Amazon S3).
- The simulated time series data is processed using AWS Lambda to determine whether the controller software should be updated to address the anomaly.
- The Lambda postprocessor initiates updates to the wind turbine controller software, and the updates are propagated to the wind turbine through AWS IoT Core.
- Results are visualized in Amazon Managed Grafana.
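The detection-to-update loop above can be sketched end to end. In this sketch, every function is an illustrative stub standing in for the AWS service named in the workflow, not a real SDK call:

```python
# Illustrative sketch of the anomaly-to-controller-update loop.
# Each helper is a hypothetical stand-in for a step in the workflow
# (Timestream query, SNS notification, Fargate dispatch, IoT update).

def query_recent_telemetry():
    # Stub for the scheduled Lambda's Timestream query (step 3).
    return [{"rotor_speed_rpm": 14.2}, {"rotor_speed_rpm": 16.8}]

def detect_anomaly(samples, overspeed_rpm=15.0):
    # Simple rules-based check, e.g. a controller overspeed alarm.
    return any(s["rotor_speed_rpm"] > overspeed_rpm for s in samples)

def run_workflow():
    samples = query_recent_telemetry()
    if not detect_anomaly(samples):
        return "no anomaly"
    # Steps 4-9 would notify through SNS, prepare the OpenFAST input
    # file, dispatch the simulation to Fargate, post-process results,
    # and push a controller update through AWS IoT Core.
    return "simulation dispatched"
```

Because the orchestration is event-driven, each stub maps naturally to an independent Lambda function or container task rather than one monolithic process.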
An example of an anomaly in Step 3 is a controller overspeed alarm. Exceeded thresholds can be detected using simple rules-based anomaly detection, and you can also incorporate more advanced anomaly detection using machine learning through Amazon SageMaker. In the preceding workflow, four elements are key to creating the digital twin. The next sections describe these four elements.
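As a concrete example, a rules-based overspeed check might only flag an anomaly when the threshold is exceeded for several consecutive samples, so that a momentary gust does not trip the rule. The threshold and window size below are illustrative, not values from this architecture:

```python
def overspeed_anomaly(rotor_speed_rpm, threshold=15.0, consecutive=3):
    """Return True when rotor speed exceeds `threshold` for
    `consecutive` samples in a row (simple rules-based detection)."""
    run = 0
    for rpm in rotor_speed_rpm:
        run = run + 1 if rpm > threshold else 0
        if run >= consecutive:
            return True
    return False

# A brief spike does not trip the rule; sustained overspeed does.
print(overspeed_anomaly([14.0, 15.5, 14.1, 14.3]))        # False
print(overspeed_anomaly([14.9, 15.2, 15.4, 15.6, 15.1]))  # True
```

A machine-learning detector in Amazon SageMaker could replace this function without changing the rest of the event-driven workflow.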
Event-driven architecture
Event-driven architecture enables asynchronous communication between isolated systems and services. Event-driven workflows start automatically when an event occurs. An event can be an active alarm or an OpenFAST output file uploaded to Amazon S3. This means you can scale the number of actively monitored wind turbines from 1 to 100 (or more) without allocating new resources.
AWS Lambda provides instant scaling to increase the number of OpenFAST simulations available for processing. Additionally, Fargate eliminates the need to provision or manage underlying OpenFAST compute instances. Leveraging serverless computing services eliminates the need to manage the underlying infrastructure, provides demand-based scaling, and reduces costs compared to statically provisioned infrastructure.
In practice, an event-driven architecture gives teams the flexibility to automatically prepare input files, dispatch simulations, and post-process results without manually provisioning resources.
Containerization
Containerization is the process of packaging an application together with the libraries it needs to run. Docker creates container images that bundle the OpenFAST executables. As shown in Figure 2, FastAPI is also included in the OpenFAST container so that simulations can be dispatched through a web RESTful API. Note that OpenFAST and FastAPI are independent projects. The RESTful API for OpenFAST provides the following commands:
- Initial conditions (`PUT: /execute`)
- Upload simulation results to Amazon S3 (`POST: /upload_to_s3`)
- Provide simulation status (`GET: /status`)
- Delete simulation results (`DELETE: /simulation`)
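A minimal client-side sketch of these four operations can be written with only the Python standard library. The host name and the payload field are assumptions for illustration, not the actual service endpoint:

```python
import json
import urllib.request

# Hypothetical ALB endpoint fronting the OpenFAST containers.
BASE_URL = "http://openfast-service.example.com"

def build_request(method, path, payload=None):
    """Construct (but do not send) an HTTP request for the OpenFAST API."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(BASE_URL + path, data=data, method=method)
    req.add_header("Content-Type", "application/json")
    return req

# The four operations exposed by the containerized FastAPI service:
execute = build_request("PUT", "/execute", {"wind_speed_ms": 12.0})  # illustrative initial conditions
upload = build_request("POST", "/upload_to_s3")
status = build_request("GET", "/status")
delete = build_request("DELETE", "/simulation")
```

In the actual architecture, the Lambda preprocessor would send requests like these to the ALB, which routes them to an available OpenFAST container.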
This setup allows engineering teams to pull an OpenFAST simulation version aligned to the physical wind turbine in operation without manual configuration.
Load balancing and autoscaling
This architecture uses Amazon EC2 Auto Scaling and an Application Load Balancer (ALB) to manage fluctuating processing demands and enable concurrent OpenFAST simulations. EC2 Auto Scaling dynamically scales the number of OpenFAST containers based on the volume of simulation requests, reducing costs by avoiding idle resources. Combined with the ALB, this setup evenly distributes simulation requests across OpenFAST containers, ensuring desired performance levels and high availability.
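As an illustration, a target-tracking scaling policy keyed to ALB request count per target could drive this behavior. The resource label and target value below are placeholders, not values from this architecture:

```json
{
  "TargetTrackingConfiguration": {
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ALBRequestCountPerTarget",
      "ResourceLabel": "app/openfast-alb/.../targetgroup/openfast-tg/..."
    },
    "TargetValue": 5.0
  }
}
```

With a policy like this, the group scales out when each container is handling more than the target number of in-flight simulation requests, and scales back in as the queue drains.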
Data visualization
Amazon Timestream collects and archives real-time metrics from physical wind turbines. Timestream can store any metrics from physical assets collected through IoT Core, such as rotor speed, generator power, generator speed, generator torque, and wind turbine control system alarms, as shown in Figure 3. One of Timestream's distinctive features is scheduled queries that run on a regular basis, performing automated tasks such as computing the 10-minute average wind speed and tracking which units have controller alarms.
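A scheduled query for the 10-minute average wind speed might resemble the following Timestream SQL, shown here as a Python string constant. The database, table, and measure names are assumptions for illustration:

```python
# Illustrative Timestream SQL for a 10-minute average wind speed,
# similar to what a scheduled query could run. The database, table,
# and measure names are hypothetical, not values from this post.
WIND_SPEED_10MIN_AVG = """
SELECT bin(time, 10m) AS ten_min_window,
       avg(measure_value::double) AS avg_wind_speed_ms
FROM "WindFarmDB"."TurbineTelemetry"
WHERE measure_name = 'wind_speed'
  AND time > ago(1h)
GROUP BY bin(time, 10m)
ORDER BY ten_min_window
"""
```

Timestream's `bin` and `ago` functions make windowed aggregations like this concise, which is what allows the scheduled query to run unattended.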
This allows operations teams to view detailed insights in real-time or query historical data using SQL. Amazon Managed Grafana is also connected to OpenFAST results stored in Amazon S3 to compare simulation results to real-world operational data and view the response of simulated components. Engineering teams benefit from Amazon Managed Grafana because it provides a window into how the simulation responds to controller changes. Engineers can check whether the physical machine responds as expected.
Conclusion
The AWS Cloud provides the services and infrastructure organizations need to process data and build digital twins. Organizations can leverage open-source models to improve operational performance and use physics-based simulation to improve accuracy. By integrating technology paradigms such as event-driven architecture, wind turbine operators can make data-driven decisions in real time. Organizations can create virtual replicas of physical wind turbines to diagnose the cause of alarms and adopt strategies that limit excessive wear before permanent damage occurs.