Architecture Recovery of UVic

We recently compiled a summary of how we analyze earth system models (ESMs) based on runtime data. We also do this based on code analysis, and both approaches have their merits: dynamic architecture recovery, i.e., recovery based on runtime observations, allows us to see which parts of a model program are actually used, how often they are used, and how much time is consumed by the respective functions and operations.

Dynamic recovery process

Step 1: Understand the Model’s Build Process

Before we can perform any analysis, we need to understand the build process of a scientific model and how to instrument it with Kieker. This often requires consulting the model developers.

Step 2: Configure Model and Setup Parameters

It is of great importance to develop a model setup that executes all required parts of the model but does not take an excessive amount of time to run.
Keeping the run short matters for two reasons:

  • Monitoring introduces overhead, and on top of that, some code optimizations must be turned off; otherwise, the probes would be optimized out of the code.
  • The longer the run, the more monitoring data is generated, and the log can become quite extensive and hard to process. To ensure that the chosen setup is suitable, we compile and run it once beforehand; this ensures that the code compiles and that all the necessary setup and forcing data are in place.

Step 3: Instrument Scientific Model

We use the ability of the GNU Compiler Collection (GCC) and the Intel Fortran compiler (ifort, versions 19.0.4 and 2021.1.2) to weave instrumentation probes into a program via the command-line option -finstrument-functions. Kieker4C provides the corresponding probe functions for this purpose.

Both compiler suites are capable of instrumenting all functions, procedures, and subroutines (we refer to these collectively as operations) in Fortran, C, and other compatible languages. It is also possible to select only a subset of these operations.

The compiler weaves in calls to two probes, which are invoked at the beginning and at the end of an operation, respectively.
While -finstrument-functions causes all operations to be instrumented, this can be restricted by additional flags, such as -finstrument-functions-exclude-function-list and -finstrument-functions-exclude-file-list, which exclude specific operations and files from instrumentation.
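
For illustration, a single compile line with instrumentation enabled might look as follows (a sketch only; the file name is hypothetical, and in practice the flags are added to the model's build configuration). The -g flag additionally embeds the debug symbols needed later for name resolution:

gfortran -g -finstrument-functions -c tracers.F -o tracers.o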

Besides activating the instrumentation feature, we have to include the Kieker monitoring library in the build path and provide an implementation of the two probes, which have the following signatures:

void __cyg_profile_func_enter(void *this_fn, void *call_site);
void __cyg_profile_func_exit(void *this_fn, void *call_site);

The Kieker4C library implements both probe functions and uses Kieker to produce the minimal set of trace events, i.e., BeforeOperationEvent, AfterOperationEvent, and TraceMetadata.
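
For illustration, a minimal stand-alone implementation of these probes could look like the following sketch, which merely prints raw addresses instead of producing Kieker records (the actual Kieker4C implementation serializes the events listed above):

#include <stdio.h>

/* The probes must not be instrumented themselves; otherwise, every
   probe call would recursively trigger the probes again. */
__attribute__((no_instrument_function))
void __cyg_profile_func_enter(void *this_fn, void *call_site) {
    fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
}

__attribute__((no_instrument_function))
void __cyg_profile_func_exit(void *this_fn, void *call_site) {
    fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
}

Linked into the instrumented model, these functions are invoked on every operation entry and exit; the recorded addresses can later be resolved to names via the debug symbols, as described below.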

In Java, we can obtain method and class names at runtime. This is not possible in compiled Fortran code. Instead, the compiler can embed debug symbols into the program, which are then used to resolve names during analysis. Our analysis tools automatically extract this information from the executable to resolve the names in the Kieker events.

Names in Fortran are case-insensitive, but symbols in object code are case-sensitive. Thus, compilers convert names to lowercase and decorate them with an underscore; a subroutine TRACER_SOURCE, for example, may end up as tracer_source_ in the symbol table. During recovery, we remove these decorations; otherwise, the recovered names would deviate from the source code.

Details on how to introduce compiler flags and the library into the respective models can be found in our replication package.

Step 4: Model Execution

Once the scientific model is set up, we execute it to collect monitoring data. Depending on the model and setup, this can take minutes to hours. For the actual collection of the monitoring data, we use the Kieker collector, which receives all monitoring data, compresses it, and stores it. The collector can be started on a different machine and produces Kieker logs, including splitting up logs to avoid file-size issues.

In case the collector is too slow to process all events (it instantiates new objects for every event to facilitate event modifications), we can use netcat as a server, which is available for various platforms. Together with split, it is possible to create a setup that can store huge monitoring logs. On Linux and similar operating systems, it can be run with nc -l 5678 | split -b 1000000000 - data- where 5678 is the port number the probes are writing to. Such raw dumps can only be read by the TCP reader stage of Kieker when replaying the log.
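
To replay such a dump, one option is to stream it back into the TCP reader (a sketch, assuming the analysis tool's reader is listening on port 5678 on the same machine):

cat data-* | nc localhost 5678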

To execute and monitor the scientific model, we first start the collector or netcat and then start the instrumented scientific model.

Step 5: Monitoring Data Analytics

After the model run, we analyze the collected monitoring data. Depending on how the monitoring data was collected (see above), we use the file reader or the TCP reader stage with our analysis tools. The architecture reconstruction relies only on operation calls and can be computed from the log data with a minimal memory footprint. Our tools utilize the Kieker Architecture Model, but other architecture models can be used, too.

The analysis produces a basic architecture model based on observations and debug symbols. To improve the results, it is possible to generate a map file that lists every function found in the monitoring data and the file in which it is defined, plus an additional column that identifies a grouping. The directory structure of the source code is an example of such a grouping. Our tooling provides options to generate such mapping files automatically, which can then be tweaked by the engineer.
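
A hypothetical excerpt of such a map file could look as follows; the exact column layout is defined by our tooling, and the operation, file, and component names here are made up:

tracer_source, tracers/source.F, tracers
tracer_sink, tracers/sink.F, tracers
mom_step, mom/step.F, momentum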

Besides the dynamic analysis with Kieker, we also perform static code analysis and merge the statically recovered architecture with the dynamic one. All elements of these architectures are tagged to indicate their origin. This allows us to identify whether an operation or component exists in the statically or the dynamically recovered architecture. It is also possible to join multiple dynamic analyses to identify shared components, which is helpful when analyzing variants and versions.

UVic model (v2.9.2) architecture with two levels of components

Step 6: Recover Interfaces

While newer Fortran dialects support interfaces, comparable to the interfaces of units in Pascal and modules in Modula-2, older versions do not have any interface information. Therefore, we aim to recover interfaces based on the calls between components. Different strategies are available; for example, all calls from one component to another component can be grouped into one interface. However, this produces very large interfaces and is not helpful for program comprehension. Therefore, we collect for each provided operation the set of components that call it. Operations with an identical set of caller components are then put into one provided interface of the callee component. This will still create too many interfaces, as not every component will use all operations provided by another component, but it is a good starting point for semi-automated refinement.
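
The grouping rule itself is simple. The following C sketch illustrates it, assuming the caller set of each operation has already been collected from the monitoring data and encoded as a sorted, comma-separated string so that identical sets compare equal (all component and operation names are made up):

#include <stdio.h>
#include <string.h>

/* One provided operation of a callee component together with its set of
   caller components, encoded as a sorted, comma-separated string. */
struct operation {
    const char *name;
    const char *callers;
};

int main(void) {
    struct operation ops[] = {
        { "tracer_source", "atmosphere,ice" },
        { "tracer_sink",   "atmosphere,ice" },
        { "mom_step",      "embm" },
    };
    int n = sizeof(ops) / sizeof(ops[0]);
    int grouped[sizeof(ops) / sizeof(ops[0])] = { 0 };

    /* Operations sharing an identical caller set form one provided interface. */
    int iface = 0;
    for (int i = 0; i < n; i++) {
        if (grouped[i])
            continue;
        printf("interface %d (callers: %s):", ++iface, ops[i].callers);
        for (int j = i; j < n; j++) {
            if (!grouped[j] && strcmp(ops[i].callers, ops[j].callers) == 0) {
                grouped[j] = 1;
                printf(" %s", ops[j].name);
            }
        }
        printf("\n");
    }
    return 0;
}

Running this sketch yields two interfaces: one for the two tracer operations sharing a caller set, and one for the remaining operation.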

Step 7: Inspect the Recovered Architecture

There are different tools available to visualize and measure the recovered architecture.
First, the Kieker development tools include two views that allow viewing the architecture in Eclipse utilizing KLighD. One view only addresses the composition of the assembly model, without links based on calls; the other one includes call information. Both visualizations allow inspecting the recovered model interactively.

Second, the mvis command-line tool allows visualizing, inspecting, and measuring recovered architectures. It can color the model based on the data source of a recovery, which is helpful when mixing architectures from dynamic and static recovery.
For example, to identify components and operations present in both architectures, shared elements can be colored differently. In addition, mvis is able to compute different metrics regarding the architecture.

Resources

Make Model Output Comparable

Type Bachelor or Master

Task A key issue in the ocean and climate modelling community is comparing output from different models, as it comes in different formats, notations, and units. The goal of this thesis is to create a common format, or a way to specify a format together with transformations that convert model output into that common format.

There is previous work on this topic:

  • CMIP
  • ESMValTool https://www.esmvaltool.org/
  • CMOR https://cmor.llnl.gov/

Dataflow Analysis of Climate Models

Type Bachelor or Master

Task: Static analysis of Fortran code with FParser to extract data flow and create a dataflow model from it. FParser is written in Python, while most of our analysis tools are written in Java. Thus, FParser will be used to identify read and write accesses to data and return a list of such accesses, which can then be used to enrich our existing dataflow model.

Identify and analyze coding techniques for mathematical methods in Fortran

Type Bachelor (one project, one analysis)

Task: Analyze scientific modeling software written in Fortran for coding techniques that implement mathematical methods, identify invariants, and create testable assertions in Python.

Sources & Notes

Starting point: Existing Fortran software

  • UVic http://terra.seos.uvic.ca/model/
  • MITgcm https://github.com/MITgcm/MITgcm

Thematic Domain Analysis for Ocean Modeling

We just published our paper on the analysis of the ocean modeling domain. It provides answers regarding the characteristics of the domain, how scientists develop and research models, how they implement them, and how technologies and methods are applied in this endeavor. Based on these insights, software engineers can better apply their tools, methods, and approaches to the scientific modeling domain to support the software side of model development, which suffers from a lack of engineering insight.

The paper is available as a pre-print at https://arxiv.org/abs/2202.00747 and the final version is available via DOI 10.1016/j.envsoft.2022.105323.

Software Development Processes in Ocean System Modeling

Scientific modeling provides mathematical abstractions of real-world systems and builds software as implementations of these mathematical abstractions. Ocean science is a multidisciplinary discipline developing scientific models and simulations as ocean system models that are an essential research asset.
In software engineering and information systems research, modeling is also an essential activity. In particular, business process modeling for business process management and systems engineering is the activity of representing processes of an enterprise, so that the current process may be analyzed, improved, and automated.
In this paper, we employ process modeling for analyzing scientific software development in ocean science to advance the state in engineering of ocean system models and to better understand how ocean system models are developed and maintained in ocean science. We interviewed domain experts in semi-structured interviews, analyzed the results via thematic analysis, and modeled the results via the business process modeling notation BPMN.
The processes modeled as a result describe an aspired state of software development in the domain, which is often not (yet) implemented. This enables existing processes in simulation-based system engineering to be improved with the help of these process models.

The paper can be found at https://arxiv.org/abs/2108.08589

Thematic Map of the Domain Analysis

One of our initial initiatives was to understand the domain of ocean modeling, its processes, and its characteristics. Therefore, we conducted a set of interviews with domain experts, i.e., scientists, research software engineers, and technicians. To analyze the interview data, we relied on a Thematic Analysis approach. The resulting map can be found here. To open or close a theme (yellow) or category (blue), click on the respective node.

Develop a DSL for Bio-Geo-Chemical Models

Type Master Thesis

Task Part of the OceanDSL project is to provide a DSL for biogeochemical models or parts of them. These models can be specified in various ways. Our goal is to provide a concise DSL that allows creating and extending such models. The key tasks in this thesis are:

  • Analyze the domain to understand how biogeochemical models are researched and developed.
  • Identify parts we can address with a new DSL.
  • Design a DSL based on our technology stack, the stack of Dusk/Dawn or PSyclone, or a combination of those, depending on your findings.

Resources

  • Dusk/Dawn https://github.com/MeteoSwiss-APN/dawn
  • Metos3D
  • PSyclone https://psyclone.readthedocs.io/en/stable/
  • Biogeochemical models
    • Piwonski, J. and Slawig, T. (2016). Metos3D: the Marine Ecosystem Toolkit for Optimization and Simulation in 3-D – Part 1: Simulation Package v0.3.2. Geoscientific Model Development, 9:3729–3750
    • Kriest, I., Khatiwala, S., and Oschlies, A. (2010). Towards an assessment of simple global marine biogeochemical models of different complexity. Progress In Oceanography, 86(3-4):337–360

DSL Designing And Evaluating For Ocean Models

The development of ocean models requires knowledge from different domains. One aspect of the modeling is the model configuration, which takes place in code files or parameter lists. The configuration process differs between ocean models, and users must know the differences. To make configuring ocean models easy, we can implement a DSL that generates valid configuration files for each model. In this thesis, we design and implement such a configuration DSL. We study use-case scenarios involving model parameterization and one ocean model. Based on the findings, we design and implement the DSL. Although the DSL does not generate all configuration files, the evaluation shows that the concept works.

  1. Serafim Simonov. 2020. DSL Designing And Evaluating For Ocean Models. Kiel University. Retrieved from http://eprints.uni-kiel.de/51160/

First Visualization of the UVic Architecture

Our goal is to understand the composition of climate and ocean models to support their modularization and future development. Recently, we applied runtime monitoring to the MITgcm model. Based on our experience there, we applied the technique to the Earth System Climate Model (ESCM) of the University of Victoria, Canada. Please be aware that these are very early results and may be erroneous.

The UVic model can be compiled with GNU Fortran (gfortran), but the current setup we used only produces a running executable with the Intel Fortran compiler (ifort). Fortunately, ifort supports the same instrumentation interface as gfortran. Thus, we could apply the same probes in this context.

Based on this setup, we recorded 79 GB of binary monitoring data from a partial model run. We aim to perform a complete run in the future, but for a proof of concept, a partial run is sufficient. For our analysis, we aimed to use the standard Kieker trace-analysis tool.

However, the Kieker trace-analysis tool uses call traces to reconstruct the deployed architecture. It is designed that way based on experience with web-based and service-oriented systems, which usually have a small number of calls per trace, triggered by an incoming event, message, or request. Scientific models are quite different: they are started once and run for a long time. Essentially, this results in one big trace, in our case 79 GB. This would not fit into memory, and even if it did, processing would be very slow. Thus, we created a new architecture reconstruction tool based on another set of Kieker analysis stages. Utilizing this tool, we could generate our first component and operation graphs. The first component graph can be seen below.

UVic architecture based on Kieker monitoring data. Files are considered to be components.

We will continue our analysis to provide more readable graphs.