Channel: PostSharp Blog

Complex log processing with PostSharp and NXLog


Event tracing is essential for keeping an IT infrastructure healthy, and adding it to even a complex system, and maintaining it afterwards, can be surprisingly easy.

In this blog post, we present PostSharp Logging, a .NET library for automatic detailed tracing, and NXLog, a log management system for collecting, processing and forwarding log data, and show how you can use these tools in your applications.

We’ll consider the following scenario. You are developing a large product consisting of ten different applications that communicate with each other over an enterprise service bus or other means. Each application (service) is a standalone program, most are written in C# but some might not be .NET at all. When something happens unexpectedly in such a complex system, how do you make sense of it?

Automatic detailed logging with PostSharp

First, you’ll need logs. You want to know what was happening in each application when an unexpected situation occurred.

Traditionally, you would have your applications create logs by sprinkling the code with logging statements: at the beginning of some methods, in catch blocks, and elsewhere. But you never quite know what information you will need and it is easy to forget this kind of logging.

PostSharp Logging can help here. When you add PostSharp Logging to your application or library, you designate some methods as being logged. You can do this by annotating a method with the [Log] attribute, annotating a class to log all methods in it, or by using regexes or even C# code, executed at build time, to designate a large number of methods as logged at the same time.

The methods will then print their name, along with parameters and return values, when they’re entered or exited.
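To illustrate what this kind of automatic method-boundary logging produces, here is a small Python sketch (a hypothetical analogue, not PostSharp itself) of a decorator that logs entry, parameters, and the return value or exception:

```python
import functools

def log(func):
    """Log method entry, parameters, and the return value or exception."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"{func.__qualname__}{args} | Starting.")
        try:
            result = func(*args, **kwargs)
            print(f"{func.__qualname__} | Succeeded: returnValue = {result!r}.")
            return result
        except Exception as e:
            print(f"{func.__qualname__} | Failed: {e!r}.")
            raise
    return wrapper

@log
def add(a, b):
    return a + b

add(2, 3)  # logs "Starting." and "Succeeded: returnValue = 5."
```

PostSharp generates the equivalent instrumentation at build time, so no decorator or wrapper code appears in your source.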

In our example app, we’ll use [assembly: Log] to log all methods and we’ll use the following configuration at the beginning of our main method to send all logging events to a local file:

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.File("single.log",
        outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss}[{Level}] {Indent:l}{Message}{NewLine}{Exception}")
    .CreateLogger();

LoggingServices.DefaultBackend = new SerilogLoggingBackend(Log.Logger)
{
    Options =
    {
        IncludeExceptionDetails = true,
        IncludeActivityExecutionTime = true
    }
};

This configuration causes all methods in the app to emit logging lines, so you'll end up with a file looking like this:

2020-11-09 13:51:39[Debug]   Program.AcceptRequest(EnterpriseApp.Request = ) | Starting.
2020-11-09 13:51:39[Debug]     Request.get_Index() | Starting.
2020-11-09 13:51:39[Debug]     Request.get_Index() | Succeeded: returnValue = 9, executionTime = 0.01 ms.
2020-11-09 13:51:39[Information]      | Processing request 9.
2020-11-09 13:51:39[Debug]     Program.Enhance(EnterpriseApp.Request = ) | Starting.
2020-11-09 13:51:39[Warning]     Program.Enhance(EnterpriseApp.Request = ) | Overtime: executionTime = 527.79 ms, threshold = 500 ms.
2020-11-09 13:51:39[Debug]     Program.Persist(EnterpriseApp.Request = ) | Starting.
2020-11-09 13:51:39[Information]        | Persisted.
2020-11-09 13:51:39[Debug]     Program.Persist(EnterpriseApp.Request = ) | Succeeded: executionTime = 0.05 ms.
2020-11-09 13:51:39[Warning]   Program.AcceptRequest(EnterpriseApp.Request = ) | Overtime: executionTime = 528.04 ms, threshold = 500 ms.

(In a real scenario, you would only apply [Log] to some methods or you would specify exclusions.)

Collecting the logs

We can do this for each of our services, but then each service has its own log file, and each service may live on a different computer. Where do we go when we need to read the logs?

Here it is useful to collect the logs from all the services and dump them into common storage on a log server. We will use NXLog for this. NXLog is a multi-platform log forwarder that can collect logs from a variety of sources and forward them to different collectors. In our case, it will run on each of our computers and transmit logs from each service's "single.log" file to the NXLog instance on the common storage computer, over TCP/IP.

In addition, we’ll have the local NXLog add the computer’s name to each of the log messages so that we know where each log message in the common storage comes from.

We’ll use this nxlog.conf configuration:

define HEADERLINE_REGEX /^(\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})(\[\S+\])\s+(.*)\|\s+(.*)/
define EVENTREGEX /^(\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})(\[\S+\])\s+(.*)\|\s+(.*)/s

<Extension multiline>
    Module     xm_multiline
    HeaderLine %HEADERLINE_REGEX%
</Extension>

<Input input_file>
    Module    im_file
    File      "C:\\app\\logs\\single.log"
    InputType multiline
    <Exec>
        $Hostname = hostname_fqdn();
        if ($raw_event =~ %EVENTREGEX%)
        {
            $EventTime = parsedate($1);
            $Severity  = $2;
            $Method    = $3;
            $Message   = $4;
        }
        $raw_event = $EventTime + $Severity + "  <" + $Hostname + "> " + $Method + "| " + $Message;
    </Exec>
</Input>

<Output output_tcp>
    Module     om_tcp
    Host       10.0.1.22
    Port       516
    OutputType Binary
</Output>

<Route Default>
    Path input_file => output_tcp
</Route>

This means that the local NXLog instance will monitor single.log for changes and each time a new row is added, NXLog will read it, transform it, and send it to our server at 10.0.1.22.
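The transformation performed by the Exec block above can be mimicked in a few lines of Python (a sketch using the same regex as the configuration; the hostname is hard-coded here instead of calling hostname_fqdn()):

```python
import re

# Same pattern as EVENTREGEX in nxlog.conf.
EVENT_REGEX = re.compile(
    r"^(\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})(\[\S+\])\s+(.*)\|\s+(.*)", re.S)

def annotate(raw_event: str, hostname: str) -> str:
    """Insert the machine name between the severity and the method name."""
    m = EVENT_REGEX.match(raw_event)
    if not m:
        return raw_event  # pass non-matching lines through unchanged
    event_time, severity, method, message = m.groups()
    return f"{event_time}{severity}  <{hostname}> {method}| {message}"

line = "2020-11-09 13:51:39[Debug]   Program.AcceptRequest(EnterpriseApp.Request = ) | Starting."
print(annotate(line, "app-server-01"))
```

Each annotated line therefore carries its origin machine before it ever leaves the box, so no context is lost when lines from many computers are interleaved on the server.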

On the storage server, we’ll gather the logs from each TCP connection and dump them into a local rolling file using this configuration:

<Extension _fileop>
    Module xm_fileop
</Extension>

<Input input_tcp>
    Module im_tcp
    Host   0.0.0.0
    Port   516
</Input>

<Output output_file>
    Module om_file
    File   "/opt/nxlog/var/log/server_output.log"
    <Schedule>
        Every 1 hour
        <Exec>
            if file_size(file_name()) >= 10M
            {
                file_cycle(file_name(), 7);
                reopen();
            }
        </Exec>
    </Schedule>
</Output>

<Route Default>
    Path input_tcp => output_file
</Route>

This configuration means that we accept input data from all network interfaces at port 516/TCP and write them to a rolling log file — and there are at most 7 log files of 10 MB max each.
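The size check and rotation done by the Schedule block can be sketched in Python (a hypothetical re-implementation; NXLog's file_cycle shifts server_output.log to server_output.log.1 and so on, keeping a bounded number of old copies):

```python
import os

def cycle_if_needed(path: str, max_size: int = 10 * 1024 * 1024,
                    max_files: int = 7) -> bool:
    """Rotate the log file once it reaches max_size, keeping at most max_files old copies."""
    if not os.path.exists(path) or os.path.getsize(path) < max_size:
        return False
    oldest = f"{path}.{max_files}"
    if os.path.exists(oldest):
        os.remove(oldest)                  # drop the oldest copy
    for i in range(max_files - 1, 0, -1):  # shift .1 -> .2, .2 -> .3, ...
        if os.path.exists(f"{path}.{i}"):
            os.rename(f"{path}.{i}", f"{path}.{i + 1}")
    os.rename(path, f"{path}.1")           # current file becomes .1
    return True
```

The hourly schedule in the NXLog config simply calls this kind of check periodically, so disk usage is capped at roughly max_files times max_size.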

Now, when an unexpected situation occurs, you can search for the timestamp in these common files and you have access to what was happening in each service.

Reporting to Kibana

But we can go further.

A common requirement is to be able to monitor the health of running services. Our infrastructure already allows us to add some health monitoring easily by pushing log messages to a database such as Elasticsearch and displaying them in a Kibana dashboard like this:

The chart on the left represents all logs and the chart on the right represents warnings only. PostSharp automatically escalates log messages to the warning level if a method ends with an exception, or when its execution time exceeds some allotted threshold.
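The escalation rule can be expressed in a few lines (a sketch of the behavior described above, not PostSharp's actual implementation):

```python
def effective_level(default_level: str, execution_time_ms: float,
                    threshold_ms: float = 500.0, failed: bool = False) -> str:
    """Escalate a message to Warning when the method failed or ran over its threshold."""
    if failed or execution_time_ms > threshold_ms:
        return "Warning"
    return default_level
```

For example, the Program.Enhance call in the earlier log excerpt took 527.79 ms against a 500 ms threshold, so its completion message was emitted as a Warning rather than Debug.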

NXLog Enterprise Edition provides om_elasticsearch, an efficient built-in module for sending data to Elasticsearch (more information in the NXLog documentation), but even with the Community Edition we can use the HTTP module om_http for this use case, with the following configuration:

<Output output_http>
    Module      om_http
    URL         http://localhost:9200/
    ContentType application/json
    <Exec>
        if $raw_event =~ /^(\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})\[(\S+)\]\s<(\S+)>\s+(.*)\|(.*)/s
        {
            $EventTime = parsedate($1);
            $Severity  = $2;
            $Client    = $3;
            $Method    = $4;
            $Message   = $5;
        }
        set_http_request_path("my_index/_doc");
        rename_field("timestamp", "@timestamp");
        to_json();
    </Exec>
</Output>

In the same file, we’ll need to change the route to output to both targets: the file and Elasticsearch.

<Route Default>
    Path input_tcp => output_file, output_http
</Route>

This configuration takes the data coming in over TCP and sends it both to the collected log file and, as JSON documents, to the Elasticsearch API, registering each log line as a new document (you can of course filter this to send only warnings and errors).
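Conceptually, each forwarded line becomes one JSON document in the index. A Python sketch of that Exec transformation, using the same regex (the sample line below is hypothetical):

```python
import json
import re

# Same pattern as in the om_http Exec block.
PATTERN = re.compile(
    r"^(\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2})\[(\S+)\]\s<(\S+)>\s+(.*)\|(.*)", re.S)

def to_document(raw_event: str) -> str:
    """Turn an annotated log line into the JSON body POSTed to my_index/_doc."""
    m = PATTERN.match(raw_event)
    if not m:
        return json.dumps({"raw": raw_event})
    time, severity, client, method, message = m.groups()
    return json.dumps({
        "@timestamp": time,
        "Severity": severity,
        "Client": client,
        "Method": method.strip(),
        "Message": message.strip(),
    })

line = "2020-11-09 13:51:39[Warning] <app-server-01> Program.Enhance(...) | Overtime"
print(to_document(line))
```

Splitting the line into structured fields is what makes the Kibana charts possible: severity, client, and method all become queryable document properties instead of opaque text.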

Conclusion

We imagined a complex enterprise application running across multiple servers, and created a system that produces, gathers and presents logs, with only a reasonable amount of work needed.

We used PostSharp Logging and NXLog Community Edition but we have barely scratched the surface of what is possible with these tools.

You can learn more about the possibilities offered by PostSharp Logging and NXLog at their official websites.


PostSharp's Great Reset: Announcing Project "Caravela", a Roslyn-based aspect framework


Today we’re excited to make a once-in-ten-years announcement: we’re releasing the first public preview of PostSharp “Caravela”, a Roslyn-based framework for code transformation and aspect-oriented programming.

We intend PostSharp “Caravela” to become the successor of the MSIL-based PostSharp Framework and PostSharp SDK.

PostSharp “Caravela” builds on 15 years of experience in code transformation and aspect-oriented programming, but has been designed from scratch for C# 9 and modern development pipelines. Its approach is radically different from PostSharp’s.

Today, we’re demonstrating two components of PostSharp “Caravela”:

  • Caravela.Framework is a high-level aspect framework comparable to PostSharp Framework or AspectJ. This component is in a very early preview and is not considered to be of any commercial use yet.
  • Caravela.Framework.Sdk is a low-level extensibility point to the Roslyn compiler, similar to source generators, but allowing for arbitrary code modifications (instead of just additions of partial classes). This component can be compared to PostSharp SDK or Fody, but using the clean Roslyn code model instead of the arcane MSIL one. This component is already very usable and useful today and is not expected to change much in the future.

Caravela.Framework: a high-level aspect framework

Caravela.Framework is a code transformation and aspect-oriented programming framework based on templates written in pure C#.

These templates make it easy to write code that combines compile-time information (such as names and types of parameters of a method) and run-time information (such as parameter values) in a natural way, without having to learn another language or having to combine C# with some special templating language.

Instead of a thousand words, let’s look at this example:

Example: logging

▶ Try in your browser

Here is the aspect code. It represents the code transformation.

public class LogAttribute : OverrideMethodAspect
{
    public override object Template()
    {
        Console.WriteLine(target.Method.ToDisplayString() + " started.");
        try
        {
            dynamic result = proceed();
            Console.WriteLine(target.Method.ToDisplayString() + " succeeded.");
            return result;
        }
        catch (Exception e)
        {
            Console.WriteLine(target.Method.ToDisplayString() + " failed: " + e.Message);
            throw;
        }
    }
}

Let’s apply the [Log] aspect to the following method:

[Log]
static int Add(int a, int b)
{
    if (a == 0) throw new ArgumentOutOfRangeException(nameof(a));
    return a + b;
}

The following method is actually compiled instead of your source code:

[Log]
static int Add(int a, int b)
{
    Console.WriteLine("Program.Add(int, int) started.");
    try
    {
        int result;
        if (a == 0) throw new ArgumentOutOfRangeException(nameof(a));
        result = a + b;
        Console.WriteLine("Program.Add(int, int) succeeded.");
        return (int) result;
    }
    catch (Exception e)
    {
        Console.WriteLine("Program.Add(int, int) failed: " + e.Message);
        throw;
    }
}

With Caravela, you can see and debug the C# code actually being compiled. For details, see Debugging code with Caravela.

Caravela.Framework.Sdk: hack the compiler

At PostSharp we are not fans of hacking because it turns out to be a hassle to maintain in the long term (and our frameworks are designed to make your code more maintainable), but sometimes there may be good reasons to overcome the limitations of the language.

Caravela.Framework.Sdk offers direct access to Caravela’s underlying code-modifying capabilities through Roslyn-based APIs. Aspect weavers written with Caravela SDK can perform arbitrary transformations of the project and syntax trees being compiled.

Example: CancellationToken

▶ Try in your browser

The next example demonstrates an aspect that adds a CancellationToken parameter to your method declarations, and a matching argument to method calls, wherever it is missing.

Because the code of an SDK-based aspect weaver is naturally more complex and would not easily fit in a blog post, please go to GitHub if you want to see the source code of the aspect weaver.

Here is some example input code:

[AutoCancellationToken]
class C
{
    public static async Task MakeRequests()
    {
        var client = new HttpClient();
        await MakeRequest(client);
    }

    private static async Task MakeRequest(HttpClient client) =>
        await client.GetAsync("https://httpbin.org/delay/1");
}

What actually compiles is this. You can see that the aspect added CancellationToken parameters and arguments as needed.

[AutoCancellationToken]
class C
{
    public static async Task MakeRequests(System.Threading.CancellationToken cancellationToken = default)
    {
        var client = new HttpClient();
        await MakeRequest(client, cancellationToken);
    }

    private static async Task MakeRequest(HttpClient client, System.Threading.CancellationToken cancellationToken = default) =>
        await client.GetAsync("https://httpbin.org/delay/1", cancellationToken);
}

Benefits of PostSharp “Caravela” over PostSharp MSIL

PostSharp “Caravela” was designed from scratch. It is based on best lessons learned from PostSharp MSIL during the last 15 years, and addresses the main obstacles that are now hindering PostSharp MSIL.

You will enjoy the following benefits with Caravela compared to PostSharp:

  • Faster builds: Caravela runs directly inside the compiler process (it is a fork of Roslyn), does not require an external process, and is therefore much faster.

  • More powerful transformations: The templating technology used by Caravela allows for more control over code than what is possible with PostSharp MSIL.

  • Better multi-platform support: Caravela does not load the whole project being built in the compiler process, therefore it avoids the cross-compilation issues that have plagued PostSharp for many years.

  • Better design-time experience: You will see introduced members and interfaces in Intellisense because Caravela will do the work at design time and not at post-compilation time. No need for weird casts.

  • Better run-time performance: Because of code generation improvements, you can create aspects that execute much faster.

  • Better debugging experience: You can switch from source code view to transformed code view and debug exactly the code that is executed.

Benefits of PostSharp “Caravela” over Roslyn source generators

Unlike Roslyn source generators, PostSharp “Caravela”:

  • can replace or enhance hand-written code (Roslyn source generators are additive only: you can only add partial classes);
  • allows you to write aspects (or code transformations):
    • in your main project (instead of a separate project),
    • using the C# language, with Intellisense and code validation (instead of building a string);
  • is therefore a real and complete framework for aspect-oriented programming in C#, with the same level of functionality that exists in other languages (such as AspectJ for Java) – which has never been the intent of Roslyn source generators.

Most Anticipated Questions

How long will the MSIL-based PostSharp be maintained?

Our current release plan with the MSIL-based PostSharp is:

  • 6.9 (Q1 2021): addressing performance issues in PostSharp Tools for Visual Studio.
  • 6.10 LTS (Q4 2021): support for .NET 6.

PostSharp 6.10 LTS will be our last supported version of the MSIL-based stack and we intend to support it according to our support policies, that is, 1 year after Caravela reaches a first LTS version. We will work with our customers to ensure the smoothest possible transition.

Will PostSharp “Caravela” be compatible with PostSharp 6.*?

How compatible do we intend to be with PostSharp MSIL? How much code will you need to rewrite?

It has been 12 years since the last major breaking change in PostSharp. Do you remember the .NET landscape in 2008? Clearly, we cannot build a new platform by keeping compatibility with designs that were optimal 12 years ago. However, we understand that PostSharp is used by thousands and we want to find a compromise between modernity and backward compatibility.

We have settled on the following compromise:

  • your aspect code (typically fewer than a dozen classes) will need to be completely rewritten,
  • your business code should not be affected.

What will happen with PostSharp Patterns?

We intend to port PostSharp Patterns to PostSharp “Caravela” in a way that maximizes backward compatibility, but we may also take the opportunity to make a few long-overdue breaking changes.

How will PostSharp “Caravela” be licensed and priced?

We don’t know yet. The preview releases are being licensed under the terms of the Evaluation License of PostSharp.

Summary

PostSharp “Caravela” is the future of aspect-oriented programming and metaprogramming in .NET. It will take us a long time to get there, but the possibilities are amazing and the path much less rocky than 10 years ago.

For more information, please have a look at the home of PostSharp “Caravela” on GitHub.

If you have any feedback or question regarding Caravela, please open an issue, start a discussion, or contact us directly at hello@postsharp.net.

Argo Data Logging Solution using PostSharp and NLog


This entry will describe how Argo Data used PostSharp Logging to provide logging for many of Argo’s projects and how Argo customized the logging effort to create added value for logging entries in terms of traceability across multiple services. The audience for this entry is architects and developers.

The Problem

At Argo, our solution is composed of several different micro services that work together to produce the end results. A service can call another service which can call another service, and so on, to provide the final payload to the caller. This means that there are many layers of calls with multiple log files and possibly on multiple servers. Tracking a single call through several services can be a daunting task.

The source code is available on GitHub via this link.

Tracking across Layers

In order to track service calls across service boundaries, Argo introduced the concept of the “Unique Id” (UID). This UID originates from the caller that calls the first service in the chain of calls. The caller generates a GUID value and passes it in the header of the call as the UID. A system of logging was then created whereby the UID was extracted from the header at the time the log message was written. This shielded the developer from having to extract the UID from the header for every log message that was written.

When a service had to call another service to complete its process, it passed along the UID in the header of the call. That service would extract the UID from the header and then use that in all of its log file entries. Doing this created a single piece of data (UID) that could be used across all log files to find the entries in a log file related to a specific call.
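The scheme can be sketched as follows (hypothetical Python services standing in for the real C# HTTP layer; the header name X-Unique-Id is an assumption for illustration):

```python
import uuid

def log(uid, message):
    """Every log line carries the UID, so one grep finds the whole call chain."""
    print(f"[{uid}] {message}")

def call_service(handler, headers=None):
    """Each hop reuses the caller's UID, generating one only at the entry point."""
    headers = dict(headers or {})
    uid = headers.setdefault("X-Unique-Id", str(uuid.uuid4()))
    log(uid, f"handling {handler.__name__}")
    return handler(headers)

def service_b(headers):
    return headers["X-Unique-Id"]

def service_a(headers):
    # Pass the same headers (and therefore the same UID) downstream.
    return call_service(service_b, headers)

call_service(service_a)
```

Because the UID is generated exactly once and then only copied forward, every log file in the chain shares the same correlation key for a given request.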

Prior to the introduction of PostSharp, Argo relied on developers to create log entries. These log entries needed to be meaningful to someone who would be reading the logs and would be able to determine what was happening. This created inconsistent logging that did not always provide the information that was needed to track down issues. What was needed was a way to generate consistent log file entries that provided useful insight into what was happening in the service. These log file entries had to be low maintenance and needed to adapt as code changed over time.

Enter PostSharp

Argo had been looking at the PostSharp solution for some time. When a new project became available, Argo made the decision to give PostSharp a try and to use the new project as a “Proof of Concept” for using PostSharp as its logging solution. Argo also made the decision to use NLog as its backend logging solution and to write services in .NET Core.

Injecting the Unique ID into PostSharp logs

The UID is an important piece of data for Argo as explained above. What was needed was a way to inject the UID into every call related to a given service call without having to write code over and over again. After some discussion with the PostSharp team, they recommended logging within the context of an activity. That is discussed below.

Middleware

.NET Core has the concept of middleware. Using middleware, developers can intercept a call and make decisions before processing the call forward. Argo made use of this technology, writing custom middleware, as a way to intercept the call and provide the start of the logging activity for that call. Setting this up is as easy as adding a “use” clause in the Configure method in the startup file. (ServiceTraceMiddleware is the custom class that handles this middleware functionality.)

app.UseMiddleware<ServiceTraceMiddleware>();

In the middleware, several steps take place to set up the logging and inject the UID into the logging calls that are generated by PostSharp.

  1. Extract the UID from the header.
  2. Create an instance of LoggingPropertyData (class) and assign it the value for the UID. (This class has a property called UniqueId.)
  3. Create an activity and provide the properties to log.
  4. Move to the next call in the series.
  5. Once the call returns, set the outcome to complete the activity.

Here is the code to do that.

var loggingPropertyData = new LoggingPropertyData { UniqueId = uniqueId };
OpenActivityOptions options = new OpenActivityOptions(loggingPropertyData);

using (var activity = _logSource.Default.OpenActivity(
    FormattedMessageBuilder.Formatted("Start request"), options))
{
    // Do logging or whatever
    await _next(context); // Move to the next item in the pipeline
    // Do more stuff if desired
    activity.SetOutcome(
        PostSharp.Patterns.Diagnostics.LogLevel.Info,
        FormattedMessageBuilder.Formatted("Request Completed."));
}

The variable _logSource is defined as a static variable on the class as follows. This is the PostSharp LogSource instance.

private static readonly LogSource _logSource = LogSource.Get();

Extracting the UID

Getting the UID into the logging messages doesn’t do any good if it isn’t written out to the log files. To do this, a custom backend and record builder were needed. Since PostSharp already has a backend for NLog, all that was needed was to inherit from that and add the code needed to extract the UID and pass it into the logging mechanism.

The work for this is found in overriding the Write method. In that method, the VisitProperties method is used to find the UniqueId property and set that as a property on the backend logger.

protected override void Write(UnsafeString message)
{
    try
    {
        string uniqueId = string.Empty;
        var log = ((NLogLoggingTypeSource) TypeSource).Logger;

        this.Context.VisitProperties((string name, object value) =>
        {
            if (name == "UniqueId")
            {
                if (value != null && !(value is string s && string.IsNullOrEmpty(s)))
                {
                    uniqueId = value.ToString();
                }
            }
        });

        log.Properties[Constants.UniqueIdKey] = uniqueId;
    }
    catch (Exception exception)
    {
        Debug.WriteLine(exception);
    }

    base.Write(message);
}

In this code, the VisitProperties looks for the property UniqueId. Recall that this is the name of a property found in the LoggingPropertyData class that was instantiated and assigned in the middleware. If found, it sets the local variable uniqueId to that value. The local variable is then used to set the property on the logger that does the actual logging.

Rendering UniqueId

The final step includes writing some code to format the uniqueId value, plus some configuration so that it gets written to the logs. This part isn't PostSharp-specific code, but it does demonstrate how to take the property value and write it out to the log file without having to copy and paste code everywhere. Other backend logging systems may have ways of accomplishing the same thing, but this example does so using NLog.

NLog allows developers to extend the LayoutRenderer class and override the Append method in order to create a render item that can be inserted into the NLog config file. In this case, a new LayoutRenderer object is created that is labeled “UniqueId”. In the Append method, a check is made for the uniqueId property in the log event. If found, it is reformatted so that all the “-” characters are removed, yielding a single string of 32 characters. If it is not found, the value is padded with spaces. This is useful when writing to the log file as it creates a more symmetric appearance.
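The formatting rule just described can be sketched like this (a hypothetical Python illustration; the real code lives in the NLog LayoutRenderer override):

```python
def render_unique_id(unique_id):
    """Strip dashes from a GUID for a fixed-width 32-character column,
    or emit 32 spaces when no UID is present."""
    if not unique_id:
        return " " * 32
    return unique_id.replace("-", "")

print(render_unique_id("3f2504e0-4f89-11d3-9a0c-0305e82c3301"))  # 32 hex characters
```

Either way the renderer emits exactly 32 characters, which keeps the pipe-delimited columns in the trace file aligned.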

In the NLog config file, the “UniqueId” renderer property can be inserted into any layout where desired. In the case of the trace file, the layout is structured as follows.

layout="${longdate}|${var:threadid}|${UniqueId}|${uppercase:${level}}|${logger}|${message} ${exception:format=tostring}"

Object Logging

Out of the box, PostSharp will generate logging that writes out the inputs and outputs of methods that are configured to have logging. (Typically public and protected methods, but it depends on what is defined in the GlobalAspects file.) However, if an object is being passed in or returned, PostSharp will render the result of the ToString method on that object. Therefore, instead of seeing the data passed in or returned, the class name is rendered in the log message, as that is the default behavior of the ToString method. In order to make this more useful, override the ToString method and return the values of the properties in the object. This is very easy to do using JSON serialization of the object.

return JsonConvert.SerializeObject(this);

This is very convenient but be aware that this can expose Personal Identifying Information (PII) data in the log files. This is acceptable in a debug build when developing but not for a release build that will be deployed to a customer’s site. To avoid putting PII data in the log files, clone the object, mask the PII data, and then write it to the log file. Argo uses a NuGet package called ObjectCloner to quickly clone an object. Here is a sample of how that works. In the example, the method ScrubData.Obscure replaces PII data with an asterisk (*).

MyProjectDTO x = this.DeepClone();
x.PIIData = ScrubData.Obscure(x?.PIIData);
return JsonConvert.SerializeObject(x);

Using conditional compile directives, the debug build can render everything while the release build can mask sensitive data.
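The clone-and-mask approach can be sketched in Python (copy.deepcopy standing in for ObjectCloner, and a simple obscure helper standing in for ScrubData.Obscure; the field and function names are illustrative):

```python
import copy
import json

def obscure(value):
    """Replace every character of a sensitive value with an asterisk."""
    return "*" * len(value) if value else value

def to_log_json(dto: dict, release_build: bool = True) -> str:
    """Serialize a DTO for logging, masking PII fields in release builds."""
    if not release_build:
        return json.dumps(dto)      # debug build: log everything
    clone = copy.deepcopy(dto)      # never mutate the real object
    clone["PIIData"] = obscure(clone.get("PIIData"))
    return json.dumps(clone)

dto = {"Id": 9, "PIIData": "555-12-3456"}
print(to_log_json(dto))  # PIIData rendered as asterisks
```

Cloning first is the important part: masking on a copy means the running service keeps the real data while the log file never sees it.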

Conclusion

Argo’s use of PostSharp has proven to be a smart choice. Since that initial POC project, PostSharp has been integrated into several Argo projects. The logs that are being generated have proven useful in determining the state of a service and in troubleshooting problems.

 

 

About the author, Randall Woodman

Randall Woodman

Randall Woodman is a software developer working at Argo Data in Richardson, TX. He is a US Navy Veteran who got his BS degree in Computer Science after leaving the military. He has been developing professionally since 1994 with several companies in the Dallas, TX area. Randall is also an avid gamer with his current favorite games being Forge of Empires and World of Tanks Blitz. | LinkedIn

Announcing PostSharp 6.9 RC: Visual Studio Tooling performance improvements


We are happy to announce that PostSharp 6.9 RC is available today. In this release, the performance of PostSharp Tools for Visual Studio has been improved.

This version is available for download on our website and the packages for 6.9.2-rc are now available as prerelease on NuGet.

PostSharp Tools for Visual Studio has been a part of the PostSharp ecosystem since PostSharp 2.0, when we extended Visual Studio 2008 and 2010. That made projects enhanced by PostSharp more understandable and easier to debug, and a UI became available for applying the aspects provided by the PostSharp Pattern Libraries. Since then, we’ve kept pace with the development of Visual Studio itself and supported each succeeding Visual Studio version, incorporating many of the features that became available through Visual Studio extensibility.

Support for a new Visual Studio version requires a certain amount of code refactoring and maintaining an abstraction layer that lets us keep the source code shared among the different extension versions as much as possible. Sometimes, this leads to having obsolete code and suboptimal performance caused by use of legacy APIs.

In PostSharp 6.9, we’ve focused on code cleanup and performance optimization. The extension now needs less time to obtain and display the important data about your source code. Additionally, it no longer blocks the user interface more often than required by the Visual Studio Extensibility SDK. All of this improves the overall user experience when developing PostSharp-enhanced projects.

Now let’s have a look at how exactly PostSharp Tools for Visual Studio enriches the development experience and which questions about PostSharp-enhanced source code it answers.

Which code elements are enhanced by PostSharp?

Each code element enhanced by PostSharp is underlined.

Which PostSharp aspects enhance my code and how?

Aspects applied on code element are shown on mouse hover.

Hovering the mouse over any of the code elements enhanced by PostSharp shows an information box. It shows which aspects enhance this particular code element, which effect they have on the code element, and how the aspects were applied. Additionally, all types listed in the information box provide links to their source code, where available.

Which code elements are enhanced by a given PostSharp aspect and how?

The PostSharp Explorer toolbox.

The tooling comes with the PostSharp Explorer toolbox, consisting of three views. The first one shows the tree of all aspects used in the solution. Selecting one of the aspects shows a list of code elements that the aspect enhances in the second view. Selecting one of those code elements shows how the selected aspect enhances it in the third view. Double-clicking on the aspect or on the code element navigates to its declaration in the source code, where available. More features can be found in the context menus. The PostSharp Explorer toolbox can be opened via the (Extensions >) PostSharp > PostSharp Explorer Visual Studio menu.

How can a code element be enhanced by PostSharp?

PostSharp code actions.

Most of the aspects provided by the PostSharp Pattern Libraries, like Logging, Caching, Threading, and others, can be applied using the code actions menu. Select a code element in the source code and open the code actions menu either by clicking on the light bulb or screwdriver icon next to the code line or by pressing the Ctrl+. shortcut. The menu contains code actions for applying aspects available for the selected code element. Selecting one of the aspect-related menu items shows a preview of how the aspect can be applied; clicking on it performs the code change automatically. If the aspect requires any configuration, or if required NuGet packages are not yet installed in the project, a wizard window pops up and manages all the necessary steps. The list of aspects available in this menu can be configured via the (Extensions >) PostSharp > Options > Code Actions Visual Studio menu.

How can a project or a solution be enhanced by PostSharp?

Solution Explorer context menu items.

In the project or solution context menu in the Solution Explorer, use the Add > PostSharp policy context menu item to apply an aspect to the whole project or solution. If the PostSharp NuGet package is not yet installed in the project, it can be installed using the Add PostSharp to project context menu item.

Enhanced debugging experience

Enhanced debugging experience.

Having the PostSharp Tools for Visual Studio installed lets you step into or over aspects and improves the readability of the call stack shown in Visual Studio while debugging. The debugging experience can be customized via the (Extensions >) PostSharp > Options > General > Debugging Visual Studio menu.

Setting PostSharp-specific project and solution properties

PostSharp-specific project properties tab.

A PostSharp tab can be found in the project and solution properties windows, allowing you to adjust multiple PostSharp settings.

Global PostSharp and PostSharp Tools for Visual Studio settings

PostSharp options window.

Code Actions, License, Error Reporting, Customer Feedback, and other options can be configured via the (Extensions >) PostSharp > Options Visual Studio menu.

Other features

PostSharp menu.

More features, like learning and documentation resources, customer feedback and support, or solution-wide code enhancement metrics can be found in the (Extensions >) PostSharp Visual Studio menu.

Summary

PostSharp Tools for Visual Studio improve the overall development experience when using PostSharp and Visual Studio for software development. They help you better understand PostSharp-enhanced source code, let you apply PostSharp's ready-made aspects automatically, and improve the debugging experience.

In PostSharp 6.9, the performance of the Visual Studio extension is improved significantly.

As always, it is a good time to update your VS extension and NuGet packages, and report any problem via our support forum.

Happy PostSharping!

Announcing PostSharp 6.9: Visual Studio Tooling performance improvements


Just 2 weeks after releasing PostSharp 6.9 RC, we are excited to announce the general availability of PostSharp 6.9. This version is available for download on our website.

In this release, the performance of the Visual Studio extension has significantly improved. We have performed a complete review of our VSX code against modern performance best practices and removed all code that was initially written for VS 2015 and older.

If you have installed PostSharp Tools for Visual Studio, you may want to download the new release and give it another try.

For more details, please read the 6.9 RC announcement.

P.S. If you run into any issues, do let us know via support forum.

Happy PostSharping!

Announcing PostSharp "Caravela" Preview 2 (0.3.5)


We’ve made it! The Roslyn-based “Caravela” can now implement INotifyPropertyChanged in an aspect-oriented way, and you can try it in your browser. But watch out! You can run with a knife that’s in preview, but not in production code.

If you haven’t heard from us for three months, it’s because we have been badly hit by the COVID/lockdown mess in April. We’re now working with a smaller team, have more energy than ever, and we have chosen to focus it on our top priority: code, not words.

Today, we’re excited to announce the second preview of PostSharp “Caravela”, our new Roslyn-based meta-programming framework for aspect-oriented programming, code generation and code validation. PostSharp “Caravela” is to become the successor of the MSIL-based PostSharp Framework and PostSharp SDK.

Whereas the first preview, announced six months ago, was merely a proof of concept, the current preview is built on the final architecture of the product. It implements the most useful aspect-oriented features and a large part of the C# language. As an early preview, however, it is still unsuitable for production use. The most noticeable gap is the lack of support for async methods and the poor handling of warnings, including nullability warnings. Caravela still does not cover all the features of the “old” PostSharp, so if you’re excited to port your aspects to the new stack – wait.

That said, Caravela is already a wonderful playground. It already implements a load of features.

Aspect-oriented features

There are a lot of features here and illustrating them all would be long, so I invite you to follow these links to see examples and explanations.

Example: INotifyPropertyChanged

Here is an aspect that implements INotifyPropertyChanged and intercepts all property setters. Try it in your browser. You will get meta syntax highlighting as a bonus.

using Caravela.Framework.Aspects;
using Caravela.Framework.Code;
using System;
using System.Linq;
using System.ComponentModel;

namespace Caravela.Samples.NotifyPropertyChanged
{
    class NotifyPropertyChangedAttribute : Attribute, IAspect<INamedType>
    {
        public void BuildAspect(IAspectBuilder<INamedType> builder)
        {
            builder.AdviceFactory.ImplementInterface(
                builder.TargetDeclaration, typeof(INotifyPropertyChanged));

            foreach (var property in builder.TargetDeclaration.Properties.Where(
                p => !p.IsAbstract && p.Writeability == Writeability.All))
            {
                builder.AdviceFactory.OverrideFieldOrPropertyAccessors(
                    property, null, nameof(OverridePropertySetter));
            }
        }

        [InterfaceMember]
        public event PropertyChangedEventHandler PropertyChanged;

        [Introduce(WhenExists = OverrideStrategy.Ignore)]
        protected void OnPropertyChanged(string name)
        {
            meta.This.PropertyChanged?.Invoke(meta.This, new PropertyChangedEventArgs(name));
        }

        [Template]
        dynamic OverridePropertySetter(dynamic value)
        {
            if (value != meta.Property.Value)
            {
                meta.Proceed();
                this.OnPropertyChanged(meta.Property.Name);
            }

            return value;
        }
    }
}

Design-time features

Live templates

Live templates are complex code transformations, similar to aspects, but executed from the lightbulb menu in the editor and applied directly to your source code.


Syntax highlighting of aspects

If you install our Visual Studio extension, you will get additional syntax highlighting for template code: compile-time code will be displayed on a grayed background, while run-time code will be displayed normally.


Testing

We have built a dedicated xUnit-based framework to test aspects. A test consists of at least two files: an input file, which corresponds to the source code, and an output file, which contains the expected transformed code. The test compares the expected transformed code with the actual code as transformed by the aspect.

Samples and Documentation

Summary

We have reached an important milestone with PostSharp “Caravela”. We are now working on stabilizing the product and continuing to build the most important features. Until it’s done, we will continue to focus on code instead of words.

For feedback and questions, please use our GitHub discussion board.

Happy PostSharping!

-gael

UPDATE: Fixed the code example (thanks DomasM).

Announcing PostSharp 6.10 Preview: Support for .NET 6.0, Visual Studio 2022, and C# 10


PostSharp 6.10 Preview is now available for download as a preview release on NuGet. You can also download PostSharp Tools for Visual Studio 2022 from our website.

PostSharp 6.10 Preview supports .NET 6.0 RC2, C# 10 and Visual Studio 2022 RC2.

PostSharp 6.10 will not get any new features besides supporting the new Microsoft platform, as we are focusing on the next generation of our meta-programming platform, Project “Caravela”. PostSharp 6.10 will be a long-term supported (LTS) release, pinned to .NET 6.0 LTS, replacing PostSharp 6.5 LTS (published in March 2020).

If you are planning to update your projects to .NET 6.0 soon, it’s a good idea to try to update to PostSharp 6.10 and report any problem via our support forum so we can strike when the iron is hot.

Happy PostSharping!

-gael

Announcing PostSharp "Caravela" Preview 3 (0.4)


A new preview of PostSharp “Caravela”, our new Roslyn-based meta-programming framework, is now available. Caravela is the successor of the MSIL-based PostSharp Framework. It should work with most codebases, support the complete C# 9 language, and offer a very respectable set of aspect-oriented features. This preview is the last we’re releasing under the codename “Caravela”. You can try Caravela today on sample projects, but it is not recommended to use it in production projects.

The following resources are available:

Please subscribe to our newsletter to stay informed about our progress with this project.

What works

Overriding methods

Override any method with a simple code template:

The template will also work on async methods and iterators:
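As a sketch, a simple logging template in the preview API looked roughly like this. The names (OverrideMethodAspect, meta.Proceed) come from the Caravela preview documentation and may have changed in later builds:

```csharp
using System;
using Caravela.Framework.Aspects;

// Hedged sketch: wraps the target method body with log statements.
public class LogAttribute : OverrideMethodAspect
{
    public override dynamic OverrideMethod()
    {
        Console.WriteLine($"{meta.Method.Name} started.");
        try
        {
            // Executes the original method body at this point.
            return meta.Proceed();
        }
        finally
        {
            Console.WriteLine($"{meta.Method.Name} finished.");
        }
    }
}
```

Applying it is then just a matter of adding [Log] to a method.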

Overriding properties

Override the implementation of a field or property. If the examples above were annoyingly simple, here’s a more complex example that shows how to automatically generate code that calls a service locator.

Introducing members and implementing interfaces

Your aspect can generate new methods, properties, fields or events. It can make the target type implement a new interface. The following example goes further in complexity and implements the deep cloneable pattern.

Authoring complex code templates

Code templates can have compile-time conditions, loops, variables, lambda expressions, and more. Code templates can contain dynamic code to bind to the target code.

Even if the features of the template language are impressive, there will be times when it is more convenient to generate code using an interpolated string, a StringBuilder, or a similar classical mechanism. For these situations, you can parse any string containing C# code into an expression or statement, then use it just like any other expression.

The template language offers helper classes to generate run-time expressions like arrays or, as in the following examples, interpolated strings:

You can easily convert compile-time objects, such as collections, intrinsic types or reflection types, to C# expressions. You can even define custom converters for your own classes. The following example demonstrates the conversion of the system type Dictionary and a custom type. It also shows how to programmatically generate expressions.

Defining eligibility

Aspects can define the declarations to which they can be applied.

Reporting and suppressing diagnostics

Aspects can report warnings and errors. They can also suppress warnings reported by the C# compiler or other analyzers. In this example, we revisit a previous example and add some validation.

It’s perfectly fine to create an aspect that only analyzes the code and reports diagnostics, without transforming the code.

Adding aspects in bulk using fabrics

If you don’t want to add a custom attribute on each method to add your logging aspect, no problem. Fabrics have you covered. In the following example, we are adding the Log aspect to all methods of the current project.
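A rough sketch of such a fabric follows. The names (ProjectFabric, AmendProject, WithMembers) are taken from the preview documentation and may be outdated, and the LogAttribute aspect is assumed to exist elsewhere in the project:

```csharp
using System.Linq;
using Caravela.Framework.Fabrics;

// Hedged sketch: adds the Log aspect to every method of every type
// in the current project, without touching the source code.
public class Fabric : ProjectFabric
{
    public override void AmendProject(IProjectAmender amender)
    {
        amender
            .WithMembers(compilation => compilation.Types.SelectMany(type => type.Methods))
            .AddAspect<LogAttribute>();
    }
}
```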

Configuring aspects

You can create a configuration API for your aspects or just consume MSBuild properties.

Type fabrics

Without defining an aspect class, you can programmatically introduce new members, override existing members or report warnings in the current type by just defining a nested class.

Performing arbitrary modifications to code

Caravela’s philosophy is to offer a well-mannered API to express code transformations safely, without changing the code semantics, and in a composable way (which means that you can safely add to the same declaration several aspects that don’t know about each other). However, if you feel brave, you can shortcut the safety features and implement your code transformations directly using the Roslyn API. Anything you could do with an IL weaving tool like Cecil, Fody, CCI or PostSharp can now be done with Caravela. The documentation of this feature is severely outdated but you can have a look at the ConfigureAwait example.

Modifying source code with an aspect at design time

All of the examples above modify only the intermediate code, not the source code, which keeps things clean and readable. An aspect, by definition, does not modify the source code. However, any aspect can also be used as a live template, which applies the same transformation directly to your source code at design time.


Previewing the transformed code

You can compare the source code with the code being executed.


Syntax highlighting of aspect code

Aspects are like code templates that mix run-time code with compile-time code, but without any markup tags. To help you understand which expressions and statements are compile-time, and which are just normal, we built a Visual Studio extension that colors the aspect code.


Debugging the source or transformed code

You can choose to debug the source or transformed code.


Testing aspects

You can test your aspects with exactly the same testing framework we use internally. A test case generally includes a source file and an expected transformation file. The test succeeds if your aspect transformed the source file into the expected transformation file.


What does not work yet

Although the list of features that work is already long, now is not the time to use Caravela in your production projects:

  • Licensing is not yet implemented. You cannot buy it, but you can use it for free under the evaluation license.
  • It’s not the final name! “Caravela” is a code name, and we still need to reveal the final name and rename all packages and namespaces.
  • Logging and telemetry are not yet implemented, so we cannot assist users in troubleshooting and cannot proactively debug.
  • It’s largely untested with large projects and solutions – mostly just unit tests and sample projects.
  • Aspect initialization is not yet implemented.
  • You cannot yet add aspects to operators and constructors.
  • There are some gaps in the design-time experience.
  • We still want to add more features into code validation and design-time code generation.

Summary

This is the last preview of “Caravela” under the project codename and there is almost a stack overflow of features! In a couple of weeks, we will reveal the final product name and a few licensing options. All I can say for now is that a lot of these features will be available for free for everybody – from individuals to corporations.

We’d love your feedback on GitHub, Gitter or as a comment on this page.

Please subscribe to our newsletter to stay informed about our progress with this project.

Happy PostSharping!

-gael


Announcing PostSharp 6.10 RC: Support for .NET 6.0, Visual Studio 2022, and C# 10


PostSharp 6.10 RC is now available for download as a preview release on NuGet. You can also download PostSharp Tools for Visual Studio 2022 from our website.

PostSharp 6.10 RC supports .NET 6.0, C# 10 and Visual Studio 2022.

PostSharp 6.10 will not get any new features besides supporting the new Microsoft platform, as we are focusing on the next generation of our meta-programming platform, Project “Caravela”. PostSharp 6.10 will be a long-term supported (LTS) release, pinned to .NET 6.0 LTS, replacing PostSharp 6.5 LTS (published in March 2020).

If you are planning to update your projects to .NET 6.0 soon, it’s a good idea to try to update to PostSharp 6.10 and report any problem via our support forum so we can strike when the iron is hot.

Happy PostSharping!

-gael

PostSharp 6.10 now available: Support for .NET 6.0, Visual Studio 2022, and C# 10


PostSharp 6.10 is now generally available. It brings support for .NET 6.0, C# 10 and Visual Studio 2022 released less than 1 month ago. You can update your NuGet packages and your Visual Studio extension.

We could not implement all features of .NET 6, however. ARM machines are not supported as development machines or build agents, only as run-time platforms. If you are affected by this limitation, please reach out to us and we will discuss your situation personally.

PostSharp 6.10 is a long-term supported (LTS) release, pinned to .NET 6.0 LTS. As an LTS release and according to our support policies, we will support it until one year after our next LTS release. We will still provide support and bug fixes for PostSharp 6.5, the previous LTS version, until December 2022.

Most of our development efforts now go to the next generation of PostSharp, a technology called Project “Caravela”. You should hear more from us about this project very soon.

Happy PostSharping!

-gael

Webinar: Hacking C# with Adam Furmanek


Adam Furmanek shows how to hack the C# language and the .NET runtime: calling a specific override of a virtual method, calling assembly instructions provided as a byte array, replacing a method by another one, or dynamically changing the type of an object.

We’re publishing the recording and the transcript of the webinar that went live on January 20th.

Transcription

Gael: Hello again, everyone. And thank you for joining us today for this webinar. This is Gael Fraiteur, and today I’m joined by Adam Furmanek.

Adam: Hey everyone.

Gael: Adam is a software engineer at Amazon. Adam is the author of Applied Integer Programming and .NET Internals Cookbook. Adam is also a speaker. He spoke at [00:00:30] NDC, .NET [inaudible 00:00:32] and other international conferences. And today Adam will be talking about hacking C#.

Just a quick note, before we begin. Please use the Q&A feature to ask any questions you may have during today’s live webinar, and we’ll be sure to address them at the end of the presentation. We’ll follow up offline to answer all the questions we won’t be able to [00:01:00] answer during this live webinar. We are also recording the webinar and everyone will receive a link to the recording via email. Okay, let’s talk about hacking C# from the inside. Adam, welcome.

Adam: Okay. Thank you for this introduction, Gael. Let me just share my screen and hopefully it works so we can now begin. Hello everyone. As mentioned, [00:01:30] I’m Adam Furmanek and I talk about .NET internals quite a lot. And if you go to YouTube, you can find quite multiple talks about how to hack memory, how to control garbage collector, how to do all the nasty stuff, move machine code directly in C#. However, what I typically do is I just show the business purposes or how to use, how to do those things for some other reason, for some other goal. However, Kevin Gosse, which you probably [00:02:00] know always asks me, “Hey, how do we actually do all those things?” And this is how this talk started. He pointed me that even though I do use plenty of those nice techniques, I do not necessarily explain them. I just use them to do some other things.

So today it’s going to be a little different. What we are going to do today is we’ll learn how to do, how to implement those building blocks for hacking things under the hood, in .NET platform, in C# language directly. [00:02:30] So we’ll see how to generate machine code, how to abuse the type system. We’ll see how to wrap windows APIs to do some fancy stuff. All of that, we’ll just learn how to implement those things. I won’t be focusing necessarily on some business use cases. So this talk is not necessarily that you take all the things here and start applying them in your day to day production code tomorrow. However, what we’ll learn is how those things work internally, how they are implemented and how [00:03:00] we can use them and abuse them. So the agenda for today’s talk is, we will see quite a multiple examples, not necessarily related to each other.

Some of them are like just pieces of code you write directly in C# which are working on the C# level. However, some of them will be actually interrupting to some low level scenarios like operating system, or machine code or CPU, et cetera, et cetera. [00:03:30] So we will go through those multiple areas. We’ll start with something very high level like how to avoid dynamic dispatch, or how to catch async, exceptions from async void method or how to actually await asynch void methods. And then we’ll move on to some other things playing with memory directly. We’ll see how to generate a machine code and execute it from a byte… Array of bytes. Or how to handle StackOverflowException or do other stuff, which we do not necessarily need to do just [00:04:00] for the sake of doing it. But for some other purpose, which we will not necessarily be covered in today.

So let’s begin. Let’s begin. The first thing we’ll actually see is how to avoid dynamic dispatch. Just to get like a quick warm up before we do some more low levelish thingies. So the dynamic dispatch like the pillar of object oriented programming is that whenever you do have a base class and some other classes inheriting from the base class, and then when you create the instance of [00:04:30] the… Sorry. Of the… My PowerPoint is not necessarily too fast today. Okay, go back.

When you create an instance of the derived class, but assign it to the base type, and when you call a method, which is a overloaded method, which is like a polymorphic one, you expect this method to be executed against the actual instance, which is stored in the variable, right? So even though this new derived is assigned to the variable [00:05:00] of type of base, whenever we call foo we actually expect it to call the foo from the derived2, because that’s the instance we are talking about here.

So we expect to see the derived2 value printed to the screen. However, if we did not have the polymorphic invocation, what we would end up with is the base method being code. So we would just call the base for implementation and see base printed out to the screen. This is because that the new derived is [00:05:30] on static time in compilation time assigned to a variable of type base. Right?

However, the question now arises, can we control this mechanism to some extent? Can we actually call b.foo and decide whether we would like to have foo executed from the base class, from derived2 class, or maybe from somewhere in between, like the derived class. And the answer is yes, we can do that. The thing is that the way we do it is we need to understand how are those things compiled under the hood.

[00:06:00] So generally whenever you get down to the intermediate language level, what you end up with is you have… You do have these two different instructions of how you can actually execute the code. So you do have the call instruction and you do have callvirt instruction. The callvirt instruction is used to do the dynamic polymorphic invocation, right? So callvirt instruction calls this late-bound method on the object.

So we just take the object [00:06:30] time at the run time and decide which method to call. On the other hand, there is this call instruction. What it does, is it calls the method like in a static manner. Meaning that whatever we do provide to the call instruction to be executed is directly executed no matter whether it’s overloaded or not. Because we do not consider the run time inheritance hierarchy in this part. So the way we can actually achieve this dynamic invocation or [00:07:00] avoid dynamic invocation is we can control which instruction we use, whether it’s call or callvirt.
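The building block Adam describes, choosing the call opcode over callvirt, can be sketched with a DynamicMethod. This is a simplified stand-in for the webinar demo, not the exact code shown; the class and method names are illustrative:

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

class Base { public virtual string Foo() => "Base"; }
class Derived : Base { public override string Foo() => "Derived"; }

static class NonVirtualCall
{
    // Builds a delegate that invokes T's Foo with the 'call' opcode,
    // bypassing the v-table lookup that 'callvirt' would perform.
    public static Func<Base, string> Create<T>() where T : Base
    {
        MethodInfo target = typeof(T).GetMethod(nameof(Base.Foo));
        var dm = new DynamicMethod("NonVirtualFoo", typeof(string),
            new[] { typeof(Base) }, typeof(NonVirtualCall), skipVisibility: true);
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Call, target); // static binding: the run-time type is ignored
        il.Emit(OpCodes.Ret);
        return (Func<Base, string>)dm.CreateDelegate(typeof(Func<Base, string>));
    }
}

class DispatchDemo
{
    static void Main()
    {
        Base b = new Derived();
        Console.WriteLine(b.Foo());                          // virtual dispatch: Derived
        Console.WriteLine(NonVirtualCall.Create<Base>()(b)); // forced 'call': Base
    }
}
```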

And the way we’ll do is, we’ll actually go to the first example of the day. So we do have this class hierarchies, which we seen in the slides. So there is derived2 which inherits from derived, which in turn inherits from base. We do have virtual foo, which is overridden twice in both of these classes. And now what we do is we are creating a helper method called invoke method, [00:07:30] which accepts one generic parameter of which class we would like to use to actually execute this line of code. See that invoke method accepts an expression of action, not the action itself. So this lambda is not being executed at all. It’s merely like as a recipe of what code we would like to execute, of what we are trying to run. So we take this lambda. And the first thing we are doing is we check whether this lambda is an expression call.

If it’s not, then we actually [00:08:00] throw the exception, right? That’s the only place in this code where we do not have the compile-time safety, right? However, whatever else we do is compile-time safe, meaning that we cannot provide here invalid arguments, we cannot provide strings here. All we do is captured and checked by the compiler, even though we are trying to abuse those pieces a little. So we do take this lambda [00:08:30] out after we check whether it’s expression code, what we do next is we cast it and then we get all the arguments and finally get the method which we would like to call, okay? Ultimately we get the method from the type of type of T, which we provide as the… Sorry. Sorry of that. Which we provide as the generic parameter here.

What we do next is we get the lambda, which calls the method to be executed. [00:09:00] So what we get is we get ilgenerator, we start emitting the code, we get the arguments one by one. We put them to the stack. And ultimately what we do at the very end is we emit the instruction, which is call not callvirt. So when we get this thingy the call instruction is being executed. And this way we can avoid the whole dynamic dispatch. We’ll get to the code, which is executed statically, meaning that we provide which class to execute. [00:09:30] So this is the way how we can do this. And it’s all directly written in C# as you can see. There is no magic happening under the hood. Moving on to another example. So this is how we do this dispatch. Now we would like to do something else still in C# level.

We won’t be dwelling into the machine level code yet. What we would like to do is we would like to await the async void methods. Okay? So what we have with async void, the problem is we [00:10:00] do not have a thing which we can await, right? Whenever we do have async methods, they typically return task or return whatever else, so we can have something which is awaitable.

On the other hand, async void methods, they do not return things like this. So it’s much harder actually to see what we could await here, right? However, instead of awaiting them directly, what we can do is we can implement a custom synchronization context, which will [00:10:30] do the magic and await the method for us. In order to do that, if you would like to understand all the things which are happening here, I refer you to the talk I gave, which is called Internals of Async. Where you can see all the bits and bytes of the synchronization context.

However, what we will do today is we’ll just see how to implement this thing. Okay? So we will close this demo. We’ll move on to [inaudible 00:10:53]. To another one. Which is catch async void. And what we are going to do in this demo is we would like to start [00:11:00] with a piece of code like this method, which is an async void method. And it just prints something out, waits for a second and then throws the exception. Okay? If you do know how async works, you probably remember that throwing the exception from an async void method is not a very good idea because this most likely will crash your application. So it’ll get terminated. So we would like to not only to await for this method, but also we would like to catch the exception. So it doesn’t [00:11:30] break the application. So what can we do?

So first thing we need to do is we implement our custom task scheduler. So we’ll have a piece of logic, which takes tasks one by one, and manages how they are scheduled, how they are executed. So we’ll have a task scheduler, which will keep just this one little collection of all the tasks, all the continuations we would like to execute one by one. Okay? Having this task scheduler, what we do next is we implement our custom synchronization context. [00:12:00] And this synchronization context will first use… Sorry, create the task factory, which will use our custom task scheduler instead of the default one. So it’ll hide the scheduler, which is provided by the .NET platform and use our custom to manage all the tasks. Next, what we do is we have two different methods called post, post and send to handle all the things which are scheduled via the synchronization context.

And finally, what we do is we just create [00:12:30] a helper function, which triggers the whole code. So what we will do is instead of calling this async void method directly, we’ll actually run it within our custom synchronization context called, MyContext in here. So what this method does is it takes the lambda which we would like to execute. It first captures the current synchronization context and creates the custom one, our own synchronization context to handle all [00:13:00] those things.

Then what we do is we replace the synchronization context on the thread and we iterate through all the tasks, one by one to execute all of them in order. So we get all the continuations. Whenever you have task, whenever you have continue with whatever, you do execute it here, one by one, and you await for the results. Finally, at the very end, you just restore the synchronization context on a thread. On the thread you are executing [00:13:30] on.

And the important remark here is that we just run it within task.run method, which gives you and returns you the task you can actually use. So now when you have this task returned from here, you can actually get it and do whatever you wish. You can await it. You can call .wait. You can do all the stuff you would do with your regular tasks, like continue with, et cetera, et cetera.

So having all those things, what we can actually [00:14:00] see is that this… Sorry, I did not set the correct project to run. So this [inaudible 00:14:07]. So whenever we do run this node, you can actually see that it is being run in the try-catch thingy. So we can see that not only it does not crush at the very end, because it just swallows the exception as you can see here, but also you can get all the things, [00:14:30] all the benefits of awaiting the task.

So you can clearly see that yes, we managed to await, we managed to handle the exception and our application still works. So this is how you can actually await all the async void methods and continue safely with your application.
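A minimal, self-contained version of the pattern Adam walks through can look as follows. This is a single-threaded message pump in the style of Stephen Toub's AsyncPump; the class names are illustrative, not the webinar's exact code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Runs an async void method on a single-threaded pump and lets its
// exception propagate to the caller instead of crashing the process.
static class AsyncVoidPump
{
    class Context : SynchronizationContext
    {
        public readonly BlockingCollection<(SendOrPostCallback cb, object state)> Queue =
            new BlockingCollection<(SendOrPostCallback, object)>();
        private int _pending;

        public override void Post(SendOrPostCallback d, object state) => Queue.Add((d, state));
        public override void OperationStarted() => Interlocked.Increment(ref _pending);
        public override void OperationCompleted()
        {
            if (Interlocked.Decrement(ref _pending) == 0)
                Queue.CompleteAdding(); // the async void method has finished
        }
    }

    public static void Run(Action asyncVoidMethod)
    {
        var previous = SynchronizationContext.Current;
        var context = new Context();
        SynchronizationContext.SetSynchronizationContext(context);
        try
        {
            asyncVoidMethod(); // OperationStarted is called by the async machinery
            // Drain continuations on this thread until the method completes;
            // an exception posted by the async void method rethrows here.
            foreach (var (cb, state) in context.Queue.GetConsumingEnumerable())
                cb(state);
        }
        finally
        {
            SynchronizationContext.SetSynchronizationContext(previous);
        }
    }
}

class PumpDemo
{
    public static async void Faulty()
    {
        await Task.Delay(100);
        throw new InvalidOperationException("boom");
    }

    static void Main()
    {
        try { AsyncVoidPump.Run(Faulty); }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine($"Caught: {ex.Message}"); // the app keeps running
        }
    }
}
```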

Okay. So moving on. Those two things were the examples of how we can do fine stuff just in C#. Let’s return to the slides. And what we are going to do next is we’ll [00:15:00] just move on to running any machine code from an array of bytes in a C# application. So the important thing before we get to the machine code is we need to understand how those things work in C#, in .NET platform in general. So when it comes to functions, they are JIT-compiled typically, however they can be pregenerated, meaning compiled in an ahead-of-time manner, which you typically do using ngen or the ready-to-run [00:15:30] mechanism in .NET Core.

If you ever wondered, why is your .NET Framework, after you install your .NET Framework or .NET Core or whatever .NET runtime, why is your CPU eating like 100% of its power for a couple of minutes? That is because right after you install it, it starts compiling all the framework functions to the machine code capable of being executed directly on your CPU. That is because once you install the .NET, [00:16:00] you do know what CPU you have, what machine you are running on. So you know how to optimize things, what CPU instructions you have, et cetera, et cetera. So you can compile the code and have it right pregenerated in the machine form. However, for most of our applications and most of the code we write in C#, we cannot do that.

So it is delivered to our customers just as a simple intermediate language function, which was compiled from a C# code with the C# compile, right? [00:16:30] So once we start the application, we need to compile this intermediate language code to the actual machine code running on the machine. And this is what just-in-time compiler does. Now, when we are talking about compilation from intermediate language to some machine code, we need to deal with multiple other low level things, which we typically do not think of when we are just writing C#, right? Those things include calling convention. Those things include how parameters are passed, [00:17:00] whether it’s via registers, whether it’s via stack, whether it’s via well known location, global variables or whatever else. You need to deal with how those things return values, who cleans the stack, who serializes those values.

If we are talking about any marshaling or whatever else. So generally there are quite a lot of things we need to deal with, we need to handle, in order to get those things rolling. And this is what the just-in-time compiler does [00:17:30] for you. The other thing which is very important here is that every single function has a thing which is called a method descriptor or a method handle. There are actually multiple names for the same thing here. So the method descriptor is a piece of metadata which the .NET runtime uses to actually describe functions for you whenever it’s needed.

Like whenever you use reflection, when you ask for the function parameters, function generic type, function return [00:18:00] type, whether it’s overloaded, overridden, final, whatever, it’s all stored in the method descriptor, in the metadata. So this is something which is held and managed by the .NET platform.

And we can get access to that. We can read those things and actually start playing with them, doing some fancy stuff, as we will see in a moment. However, this is about .NET functions. Now the thing is, the machine code, which is at the very end, is not [00:18:30] assembly code. We won’t be generating the assembly language here. So we won’t be using nice opcodes like mov, like push, like pop, whatever else. Because we are just one level below that. Assembly language is a set of mnemonics and all the other instructions, which then need to be translated to actual numbers, the actual numbers of the instructions we would like to execute. And we are talking about the [00:19:00] x86 architecture, actually 32- and 64-bit, on this machine. We’ll be mostly running with 32-bit examples, but they do work in 64-bit the same way.

However, all those examples will conceptually and technically work on other architectures as well. If we are talking ARM here, PowerPC, whatever you wish, those examples conceptually can be translated to other architectures, but here we’ll be generating machine code which is directly and strictly for the x86 architecture. [00:19:30] So when it comes to machine code, it’s generally a bunch of data. You cannot discern one from another, because in our architecture, the way we implement our computers, data and code are stored in the same memory space.

So we can consider the same bytes as being data or code, whatever you wish. You can actually go on the internet and look for a bitmap graphic, a BMP file, which is both an image, like [00:20:00] a nice image, and an executable application. So you can find a file which you can view in Microsoft Paint or just execute, because it’s all the same.

It’s just a bunch of bytes which we can interpret and use in multiple different ways. However, when talking about machine code, we also need to think about the operating system. So we need to handle the security, whether the page is executable, whether the OS has access to it, et cetera, et cetera. Many other things which we’ll need to deal with. [00:20:30] Now, the question is, how do we actually generate a bit of machine code? And the answer is, you can just compile it. So you can write some C++ or whatever language, or assembly language, compile it, and then you get the machine code. Or you can go to some webpage like the one mentioned on the screen and see how it compiles your mov eax, 123 instruction to some actual array of bytes. And this [00:21:00] is what we are going to use.

So we will take those things and we’ll start generating the machine code. And there are actually two ways we’ll do that. We’ll see two examples for doing so. The first example is, once we have the array of bytes, we need to have a physical handle which we can use to call that array, right? We need to have some pointer, some delegate, some managed thingy which we can use to call the method from C#. And there are two ways to get such a thing. [00:21:30] One of them is the method called GetDelegateForFunctionPointer, which is in the Marshal class. It’s meant to be used in interop scenarios. So whenever we do have interop, like we would like to pass things from .NET to C++ or the other way around, you can use this method to get the pointer to some actual code to be executed.

And Marshal does that. It converts the method to some specific delegate of some specific calling convention. So you [00:22:00] need to adhere to all those things. However, this is something we’ll see in a sec to understand how it works internally. The other technique we can use is we will take the C# code directly, and then we’ll modify it in place, modifying the machine code of the method to jump from here to there and changing the execution, how we are going between methods.

So let’s see all those things in action. So we’ll have [00:22:30] two examples for today. The first one will be using Marshal.GetDelegateForFunctionPointer. And the other one is going to be the jump instruction. Okay. So let’s now switch to Visual Studio. And I’m sorry for that, because apparently Zoom is capturing my mouse, so I cannot use it freely, which is a bit funny and surprising.

Okay. So here we are. So the first thing [00:23:00] we are going to use is the ByteToFunc_Marshal thingy. So what we’ll do is we need to first set it as a startup project, and then we’ll see what we are going to do with this trick. So, okay. There we go. And what we are going to do is, let’s skip over those things. What we first do is we would like to have actually two examples. The first of them is the ActionTest. [00:23:30] And the other example is the FuncTest. So we’ll test actually two different delegates. So we need to have two delegate helpers, one of them being an action of integer. So this is just a generic Action<int> thingy, only written this way, so we have better compile-time safety and the compiler takes care of all the stuff around.

And the other is just a function of int returning int. So what we want to do is, let’s start with FuncTest first. We want to [00:24:00] implement a very simple function which accepts just one integer parameter, increases it by four and then returns the value. So you can see the machine code for the method being actually here. So this is the machine code, not the assembly code, not the assembly language, but the actual machine code which we execute. And we would like to call this method in this way, right? So we would like to get a delegate which we can just use in C#. Now, how do we do that? We need to have this [00:24:30] Generate function, which helps us do that. So the first thing we do is we get the array of bytes which we would like to execute.

Then we need to get the address of the bytes in that array. So what we do first is we pin this array in memory, so it’s not being moved by the garbage collector. Because, as you probably recall, the garbage collector can take your objects and shuffle them around in memory. It can make sure that objects are [00:25:00] compact, that there is no fragmentation, no holes in between the objects. And all that stuff happens directly in the garbage collector. So we can ask it to not do that anymore.

And in order to do that, we can pin those objects. So we need to first create the GC handle. We can ask it to be pinned. We can then use it. And finally, we can use the Marshal.ReadIntPtr function to get the actual pointer pointing to the first [00:25:30] byte of the structure of this object.

And if we take a look at how an array of bytes is implemented under the hood: before the actual data, it stores two integers, specifying first the type of the objects, so that this is an array of bytes, and also the length of this array. So that’s why we skip by eight; we increase this address by eight. What we need to do next is… What we need to do is we need to unlock the page, [00:26:00] so it’s executable, so the CPU does not complain that, “Hey, those things cannot be executed.” And finally, in line 65, we call GetDelegateForFunctionPointer and just return it. So this is what we do. The same actually happens for the other example we have. This time we’ll be running an action, which not only does something, but also calls some other method.
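The Generate helper described above can be sketched roughly like this. This is a hedged reconstruction, not the exact demo code: the VirtualProtect call, the constant 0x40, and the use of AddrOfPinnedObject (instead of the manual eight-byte header skip) are my assumptions about how the steps map to code.

```csharp
using System;
using System.Runtime.InteropServices;

static class ByteToFuncMarshal
{
    // Win32 call used to "unlock" the page; 0x40 = PAGE_EXECUTE_READWRITE.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualProtect(IntPtr address, IntPtr size,
                                      uint newProtect, out uint oldProtect);

    public static TDelegate Generate<TDelegate>(byte[] machineCode)
    {
        // Pin the array so the garbage collector no longer moves it.
        GCHandle handle = GCHandle.Alloc(machineCode, GCHandleType.Pinned);

        // Address of the first data byte (this already skips the
        // type-and-length header that precedes the array's data).
        IntPtr address = handle.AddrOfPinnedObject();

        // Make the page executable so the CPU does not complain.
        VirtualProtect(address, (IntPtr)machineCode.Length, 0x40, out uint _);

        // Wrap the raw pointer in a callable delegate.
        return Marshal.GetDelegateForFunctionPointer<TDelegate>(address);
    }
}
```

For example, on 32-bit x86 with a stdcall delegate, one possible encoding of the "add four and return" function from the demo is the bytes 8B 44 24 04 83 C0 04 C2 04 00, i.e. mov eax, [esp+4]; add eax, 4; ret 4 (the exact bytes shown in the demo may differ).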

So we will have the MyWriteLine method, which we would like [00:26:30] to call and execute from within that machine code. The first thing we need to do is we take the method handle of the MyWriteLine method, as you can see, then we JIT-compile this method and get the pointer pointing to that method. And ultimately we generate a bit of machine code which pushes the address of the method we would like to execute and then just returns to this address. So we push it to the stack [00:27:00] and then do the return thingy, which ultimately jumps to the method we would like to execute. So this is the idea. This is the plan for what we would like to do. So let’s run those thingies.
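The push-and-return trampoline just described might look like this. This is a hedged sketch; the method name, binding flags and buffer handling are illustrative, not the demo's exact code.

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;

class Program
{
    static void MyWriteLine(int i) => Console.WriteLine("Parameter: " + i);

    static byte[] BuildTrampoline()
    {
        // Take the method handle and force MyWriteLine through the JIT
        // so its machine code exists, then get a pointer to it.
        MethodInfo method = typeof(Program).GetMethod("MyWriteLine",
            BindingFlags.Static | BindingFlags.NonPublic);
        RuntimeHelpers.PrepareMethod(method.MethodHandle);
        IntPtr target = method.MethodHandle.GetFunctionPointer();

        // x86: push <address>; ret. The ret pops the pushed address
        // and "returns" into MyWriteLine, effectively jumping there.
        byte[] code = new byte[6];
        code[0] = 0x68;                                          // push imm32
        BitConverter.GetBytes(target.ToInt32()).CopyTo(code, 1);
        code[5] = 0xC3;                                          // ret
        return code;
    }
}
```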

And as we can see, we were executing those two methods. So you can see that, hey, we are actually in MyWriteLine, and we do have the parameter five being printed out. But also when we do call the function [00:27:30] method, which takes an integer parameter and increases it by four, you can see that, hey, we do get 27 printed out as expected. So this is the first trick of how we can generate some piece of machine code in .NET. The other trick is actually very similar.

What it does, on the other hand, is instead of using GetDelegateForFunctionPointer from Marshal, we will jump around the code base. [00:28:00] So this time our action of integer, the same delegate we had last time, is going to inherit from some base stub class. And the base stub class is a very simple, very straightforward type which holds just one integer, one integer named Target, which we’ll use to point to the actual method we want to execute.

So having this field, we can actually inherit from BaseStub and create an ActionInt [00:28:30] which has a method called Stub, which accepts the parameters we wish. So it just accepts the integer and returns void. That’s how ActionInt is going to work. And the other class, the other delegate we have, is the FuncInt, which is just a func of integer returning integer. So it accepts one integer and then returns a value. The other code of the example stays almost the same. So we do have the same machine code calling MyWriteLine and the same machine code actually [00:29:00] adding four to the number we pass to it. What changes now is the way we get the pointer, the delegate, to execute it. So we start in a very similar manner. We get the array of bytes we would like to run.

We pin it, we get the pointer, we move by eight bytes to skip the first two integers. We unlock the page. And then what we do next is we create a new instance of something inheriting from BaseStub. And we provide the target: we set it to the [00:29:30] address pointing to the bytes from within the array of bytes we would like to run. What we do next is we create the delegate, as you can see, in line 75. And once we have this delegate, we get the stub method to be executed. So we get the Stub method from the delegate we would like to use. So you can see, we will be getting this Stub method from the ActionInt. And once we have [00:30:00] this method, we modify it. So we get the function pointer of this method in line 79.

So we get the pointer here, and we would like to get the machine code of that method and hack it. So we modify it in here to replace the first bytes of this code, which was generated and JIT-compiled, to actually do this very nice machine code trick: we get the first field of the instance on which we are executing, and [00:30:30] then we push it to the stack and return. So we effectively jump to this address. So we get the first field of this class, which is the Target, and then we jump to it. So instead of getting the pointer directly pointing to the bytes we’d like to use, we first call the Stub method, and then from the Stub method we jump somewhere else. So as you can see, this is the trick we would like to do, and we can try it out. So we can start it, and we’ll observe one more super [00:31:00] interesting thing here.
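A hedged sketch of this stub-patching variant follows. It assumes 32-bit x86, assumes the first instance field sits four bytes past the object header (ecx holds `this` under the default managed calling convention), and assumes the JITted code page is writable; all of these are internal, version-specific details, and the exact bytes in the demo may differ.

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

class BaseStub { public int Target; }                  // address to jump to
class ActionInt : BaseStub { public void Stub(int a) { } }

static class Stubs
{
    public static Action<int> GenerateViaStub(IntPtr codeAddress)
    {
        // The stub instance carries the target address in its first field.
        var stub = new ActionInt { Target = codeAddress.ToInt32() };
        var del = (Action<int>)Delegate.CreateDelegate(
            typeof(Action<int>), stub, typeof(ActionInt).GetMethod("Stub"));

        // JIT-compile Stub, then overwrite its first bytes with:
        //   mov eax, [ecx+4]   ; read Target (ecx = this, field at +4)
        //   push eax
        //   ret                ; jump to Target
        var handle = typeof(ActionInt).GetMethod("Stub").MethodHandle;
        RuntimeHelpers.PrepareMethod(handle);
        IntPtr code = handle.GetFunctionPointer();
        Marshal.WriteByte(code, 0, 0x8B);
        Marshal.WriteByte(code, 1, 0x41);
        Marshal.WriteByte(code, 2, 0x04);
        Marshal.WriteByte(code, 3, 0x50);
        Marshal.WriteByte(code, 4, 0xC3);
        return del;
    }
}
```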

So first notice that, hey, we do print the value five, we do print the value 27. So it seems to be working okay. But also in this MyWriteLine, apart from printing the parameter, the other thing which we are doing is, in line 116, we are printing the type of what has been executing. And you can see that even though MyWriteLine is in class Program, GetType returns [00:31:30] ByteToFunc.ActionInt. Why is that?

That is because when we get the handle here, the function which we would like to call, I told you that the first thing we do is we call the Stub method. So we called this method. And then, because we modified the code here, we jump from this place, from line 96, to the custom MyWriteLine, which is in line 115.

So because [00:32:00] we are jumping on a very low level, on a machine code level, the .NET platform actually does not recognize that, “Hey, we did switch gears and we are in a different class now.” Because it’s just a bunch of bytes, just a piece of bytes we can execute. So that’s why, whenever we call this GetType, we are getting ByteToFunc.ActionInt instead of Program.MyWriteLine. So that would be it for the tricks [00:32:30] with jumping and generating machine code. What we can do next is we can start hijacking methods. So what we do now is, first we generated a piece of machine code. Now we would like to get some existing method in .NET and ask it to do something else.

And the trick for doing that is very similar to what we have just seen. So we’ll use two different tricks of how we [00:33:00] can do that, the first trick being to call different methods. So let’s see what we are going to do. We do have a test class, and we do have a couple of methods here, like ReturnString, ReturnStringHijacked, some properties, whatever, whatever. So we do call a ReturnString static method from the test class. You can see that ReturnString is a static string method returning just the original string. And then, first after we call it, we try to hijack it. [00:33:30] So then, whenever we call ReturnString, it’s not ReturnString that is being called, but ReturnStringHijacked instead. So this is what we are going to do. And the way we do it is we can actually modify the metadata.

That’s one trick. And the other trick is we can jump. So let’s first see that it works. So let’s actually start this application. And what we should see is that, hey, first when we call [00:34:00] ReturnString, we get the original string. But then when we call ReturnString after hacking it, we get the modified string. So how does it work? In this example, what we do is we get the method handle, the method descriptor, of the method we are executing. So we are calling HijackMethod with the ReturnString method, and we get the metadata for it. The same metadata used for reflection, the same metadata you use when you call GetType, GetMethod, GetConstructor or [00:34:30] whatever. And we do take those handles. We compile them. And then we use the fact that the actual pointer pointing to the physical machine code of the method is eight bytes from the beginning of the method handle.

So we get the source address of the method handle, we move by eight bytes, and then we modify the pointer to point to some other method. This way, what [00:35:00] we actually do is, when we call ReturnString the next time, we still call this method, but it points to the machine code of the other method. So now actually two different methods, both ReturnString and ReturnStringHijacked, point to the exact same machine code, which returns just the modified string. So that’s the first trick we can use.
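The HijackMethod trick described above can be sketched like this. A hedged reconstruction: the eight-byte offset into the method descriptor is an internal, version-specific .NET Framework detail taken from the talk, not a supported API.

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

static class Hijacker
{
    public static void HijackMethod(MethodInfo source, MethodInfo target)
    {
        // Make sure both methods are JIT-compiled so the code pointers exist.
        RuntimeHelpers.PrepareMethod(source.MethodHandle);
        RuntimeHelpers.PrepareMethod(target.MethodHandle);

        // The pointer to the machine code lives eight bytes past the start
        // of the method descriptor; redirect source's pointer at target's code.
        Marshal.WriteIntPtr(source.MethodHandle.Value + 8,
                            Marshal.ReadIntPtr(target.MethodHandle.Value + 8));
    }
}

// Usage: Hijacker.HijackMethod(returnString, returnStringHijacked);
// afterwards both names resolve to ReturnStringHijacked's machine code.
```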

And the other trick is very similar to what we have already seen. [00:35:30] Instead of modifying the metadata, what we can do is modify the machine code itself. So we get the address of the method handle of the source method, we unlock the page, and then we get the address of the target method.

And then we just jump from the original method to the target method, right? So we can jump from the original method by [00:36:00] using this long jump instruction; you can see again how we generate a piece of machine code in here. So we generate this machine code, we put it in the method, and this is how it works. So this time, when we first call ReturnString, we call the method non-modified, the way it was compiled by the C# compiler and then the just-in-time compiler. However, the next time we call this method, we do not have this “return original string” thingy here, but we do have [00:36:30] a jump instruction from here to this place. So we execute this method and we jump from one place to another. So this is how this example works. Now the question comes: okay, do I ever need those things?
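And the code-patching variant, again as a hedged sketch (x86 jmp rel32; it assumes the JITted code page is writable or has been unlocked first, and the helper name is illustrative):

```csharp
using System;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

static class JumpHijacker
{
    public static void JumpHijack(MethodInfo source, MethodInfo target)
    {
        RuntimeHelpers.PrepareMethod(source.MethodHandle);
        RuntimeHelpers.PrepareMethod(target.MethodHandle);

        IntPtr src = source.MethodHandle.GetFunctionPointer();
        IntPtr dst = target.MethodHandle.GetFunctionPointer();

        // jmp rel32 (opcode 0xE9): the displacement is counted from the
        // end of the 5-byte jump instruction itself.
        int displacement = (int)((long)dst - (long)src - 5);
        Marshal.WriteByte(src, 0xE9);
        Marshal.WriteInt32(src + 1, displacement);
    }
}
```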

And now I’m going to show you a couple of things. So when do we want to use a thing like this? The first thing I’m going to show you, or actually I cannot show it to you, because I’m using Zoom and Zoom will not [00:37:00] allow me to change desktops here, so we would lose the stream. So I will just explain in theory how it works. So whenever you do have multiple desktops in your operating system, you’d like to run an application, and by default it runs on the desktop you are currently on. So the question comes: can we run this application on some other desktop? And believe [00:37:30] it or not, multiple desktops have been in the Windows operating system for 30 years now, since Windows 3.11 or whatever.

So a very, very old thingy. The thing is, in order to run the process, we would like to specify on which desktop we run the application. So we would like to run Notepad on one desktop this way, but also we would like to be able to specify [00:38:00] on which desktop it is being executed. Now comes the thing: can we do this in .NET? And if we go to the source code of the .NET libraries, you can see the process API, which creates this STARTUPINFO structure, which you use to provide parameters of how you run the process. You can see that it does have the lpDesktop name, so the pointer to the desktop name you would like to use for the application. However, it’s always [00:38:30] being set to zero. It’s always nullified. So you can never actually override this thingy, and you cannot provide the desktop name you would like to use to start your application.

The question comes: how can we modify that? And there are multiple things which can come to your mind. The first idea is, instead of just running this process API, the .NET API, why not copy it on the side, because we have access to the source code? [00:39:00] We have access to both .NET Core and .NET Framework. We can get this code, copy it, and then modify it the way we wish. It would work. The problem with this approach is that, hey, if they now change the process API, the process classes, for whatever reason, we need to apply those changes to our source code, right? So we need to keep those two repositories basically in sync. The other approach could be, “Okay, let’s not use the process API [00:39:30] at all. Let’s go directly to the Windows API libraries and call CreateProcess or whatever the method from the operating system is.”

It would work again, but the problem is, now we lose the whole power of the .NET wrappers, of the .NET APIs. So we cannot now use C# classes for managing processes, getting the command line, changing priorities, whatever. We just can’t do that. So what can we do instead? Well, what we can do is we can modify the [00:40:00] source code. If we go to the STARTUPINFO class, which is a very internal thingy, you can see that it does have the constructor here, and this STARTUPINFO is being created when you call the method StartWithCreateProcess, so more or less when you do Process.Start. So we do Process.Start, and then .NET creates the instance of the STARTUPINFO, calls its constructor. [00:40:30] And here we can see lpDesktop is always zeroed out. What can we do about that? The trick is we can modify this constructor.

We can hijack it and modify the machine code of this method. How do we do that? Well, the first thing is we prepare the desktop name. We prepare the string of the desktop name in a form which can be consumed by the Win API. Then we scan all the assemblies with reflection, and we get the [00:41:00] type which is called STARTUPINFO. We get its constructor, and ultimately we hijack this constructor with our new constructor thingy. So now what happens is, when .NET calls this constructor when creating the STARTUPINFO, it’s not this line of code which is being executed, but instead this code is being triggered.

And what we do here is, first we set the cb variable of the class, the same [00:41:30] way the constructor does it, and then we just set the lpDesktop to the value we would like to have.
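The reflection part of this lpDesktop trick might look roughly like this. Heavily hedged: the internal type name STARTUPINFO comes from the talk, but the binding flags, the HijackConstructor helper, and the replacement method are my assumptions about the demo code, not the actual implementation.

```csharp
using System;
using System.Linq;
using System.Reflection;

// Scan every loaded assembly for the internal STARTUPINFO class used by
// Process.Start, and grab its constructor.
Type startupInfo = AppDomain.CurrentDomain.GetAssemblies()
    .SelectMany(a => a.GetTypes())
    .First(t => t.Name == "STARTUPINFO");

ConstructorInfo ctor = startupInfo.GetConstructors(
        BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
    .First();

// Then redirect it in the style of the HijackMethod trick from the demo,
// e.g. HijackConstructor(ctor, replacement), where the replacement sets cb
// the same way the original constructor does, and additionally writes the
// prepared desktop-name pointer into lpDesktop.
```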

So we can see that just by modifying the machine code under the hood, we can modify the behavior of our application. The other example of what we could do is we could modify the way threads are created. So let’s actually see this in action. Let’s see what we are doing here. So if you do know threading in [00:42:00] .NET, you probably do understand that whenever you create a thread which just throws an unhandled exception, when this exception is being propagated, it kills your application. You cannot deal with that at all. You cannot stop your process from terminating.

So what is happening here is: this exception is going to kill you. The question is, can we help that? Can we in some way avoid this process being terminated? And the answer is yes. We can [00:42:30] get the constructor of the thread and wrap it with some nice helper method, which will just wrap the original lambda with a try-catch block.

So how do we do that? Again, we use the magic of the low-level code. So we get the Thread, we get the constructor which accepts just one ThreadStart parameter. We compile those methods. And then we start modifying them on a very low level. So instead of calling this original [00:43:00] thread constructor block, what happens is we just call our modified block. And you can see that the exception was thrown, but it was handled, and the application did not crash. So this is what we can do. This is how we can deal with all those things.
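The wrapping part can be sketched on its own, independently of the low-level patching. This is a hedged illustration: in the demo the wrapper is wired in by hijacking the Thread(ThreadStart) constructor; here only the guard itself is shown, with illustrative names.

```csharp
using System;
using System.Threading;

static class ThreadGuard
{
    // Wrap a callback so an unhandled exception is caught and logged
    // instead of tearing the whole process down.
    public static ThreadStart Guard(ThreadStart original) => () =>
    {
        try { original(); }
        catch (Exception e) { Console.WriteLine("Caught: " + e.Message); }
    };
}

// Once the Thread constructor is hijacked, every `new Thread(x)`
// effectively becomes `new Thread(Guard(x))`:
var t = new Thread(ThreadGuard.Guard(
    () => throw new InvalidOperationException("boom")));
t.Start();
t.Join();   // the exception is printed, and the process survives
```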

And we can modify the machine code under the hood to actually do some very, very nice things. Okay. So that would be it when it comes to my examples. [00:43:30] And now over to you, Gael, to provide some other thing which is worth listening to.

Gael: Yes. Thank you, Adam. This is absolutely fascinating. And when I saw this demo with hijacking methods, I really wanted to share it with our audience. I would like to say, well, we’ve been doing, not hijacking, but [00:44:00] interceptions before with PostSharp. Actually, you mentioned that we can do interceptions at three levels.

Actually, what you have shown is how to do that at the level of machine code, from inside the CLR. A little-known fact is that in PostSharp we’ve been using, well, we are still using, the library mhook [00:44:30] to disable the whole validation of strong name keys for the PostSharp process. So how do we do that? We hook the registry API. We simulate that there is a registry setting that removes strong name signing.

So we have a real use case for that. So, PostSharp itself is the technology [00:45:00] that allows you to replace a method body by another method body. So we’ve built a complete product based on that. And today I would like to introduce a product we’ve been working on for a year and a half, and it is called Metalama. And instead of doing this interception, or replacing the method body, at the level of [inaudible 00:45:28] code as in PostSharp, we actually do [00:45:30] that at the level of source code by hacking Roslyn.

So I will just play a very short demo, two minutes, to show what Metalama is about. Metalama used to be named Project Caravela and is going to be released under the name Metalama in a couple of weeks. Let’s play the demo.

With Metalama, you can encapsulate repetitive logic into a special class, named an aspect. [00:46:00] You could do that with PostSharp, but Metalama is different. In Metalama, an aspect is like a code template that is applied to your source code during compilation and generates other source code that is then executed. Look at the Log attribute class. The code that is grayed out executes at compile time, and the rest of the code executes at run time.

[00:46:30] The call to meta.Proceed means that the original method code should be invoked here. To apply the template to a method, add the aspect as a custom attribute. To preview the code that will be executed, enter the diff [inaudible 00:46:57] feature. The [inaudible 00:46:57] view compares the source code with the transformed [00:47:00] code, where the template has been applied to your source code.

We can now execute the program. As expected, this program includes instructions that come from the template and instructions that come from the source code. If you want to debug the transformed code instead of the source code, select the LamaDebug configuration and step into. [00:47:30] As you can see, we are stepping into the transformed code. That’s all I have for today. You will hear more about Metalama in a couple of weeks. Thank you. Let’s continue with the webinar, Adam.

Adam: Okay. Thank you for that. Let me [00:48:00] take over the screen. Hopefully it works again. Yes. And we are back. So we have seen quite a few examples of what we can do with methods, whether we can run them or modify them from C# or from the low-level machine code. What we are going to do now is we are going to play with types just a bit. So we are going to modify the type system. We are going to abuse it a bit.

So [00:48:30] let’s see what we have. So the first thing about the type system in .NET, or in the platform in general, is that it is verified during compilation time, and also when we load the types. But once they are loaded, the platform doesn’t care anymore. It does not verify things, does not check whether the assumptions still hold, meaning the assumptions [00:49:00] that the method accepts these parameters, returns those values, et cetera. Those things are not verified later, meaning that, as we have already seen, we can modify the machine code of a method even after we compile it, even though it was tested and checked by the compiler before. Afterwards, no one cares.

The compiler does not re-verify those things. Again, the same happens with the type system. The type system, the classes, inheritance and all that stuff, [00:49:30] they are checked, they are tested during compilation time. Meaning, if you try to do something which is not allowed by the language itself, the compiler will stop you and not emit the code doing that. However, what we can do instead is modify things internally to abuse the type system, because at the very end, an instance of a type is just a bunch of bytes. It doesn’t have anything to it which would make it magically [00:50:00] type safe.

It’s only that we make sure, or try to make sure, that the code we run against those instances is valid and does not abuse the type system. As we’ll see, we can modify that. Also, the important thing about the types, or the instances, is that all the types are generally like bags of data.

The instance itself has no methods associated with it, right? It’s not like in JavaScript, where the instance of a type [00:50:30] has pointers to the methods being executed. No. In .NET, all the instances of a given type share their methods. As we have already seen with the method handles, the pointer is stored in one place in the metadata, which is accessible via reflection or used by .NET internally. So the methods are not carried with the instance of the type. What is carried, though, is the data: all the variables, all the fields, which are [00:51:00] stored with the object.

Okay, let’s see what we can do. So in the first example which we are going to see, we’ll just play with two unrelated types to see if we can hack them a bit. So we have a class A, which has this virtual void method Print, and it prints, “Hey, this is A.Print and I am this.GetType()”.

So we would expect A being printed out here. And also we have class B, which has exactly the same [00:51:30] method, virtual void Print with no parameters. However, this method is not related to the Print in A, right? Those classes are not inheriting from each other. They do not have a common parent apart from System.Object. They have nothing else in common. So it is just pure coincidence that those methods have exactly the same parameters and exactly the same name. So what we would like to do is get an instance of class A and call [00:52:00] method B.Print on that instance. How do we do that? Well, again, we can hijack methods. So we create an instance of class A and then call PrintWithA. PrintWithA accepts just one parameter of type A, prints “Printing with A”, and then calls a.Print.

So the method from the very top here, but we also have a very similar method print with B, which also prints PrintWithB and calls two methods from the B type print and [00:52:30] Print2. And what we do is we just hijack those methods. Sorry, we modify the print with A two code PrintWithB. Okay. So what we expect to see, and let’s actually see this in action and this application should crash. And we’ll understand in a sec why it does. However, the first thing we see is that even though we have an instance of type A and we call PrintWithA what we actually see is that [00:53:00] it’s printing with B, which has been executed in here, right? You can see Printing with B being executed, and then we try calling print b.Print. However, what happened is we executed a.Print and executed abusing type system.A.

Why is that? That is because even though we did call this method with an instance of B as a parameter, actually under the hood there was [00:53:30] an instance of class A being passed. So we now try calling method Print on class B, which happens to be in exactly the same location as Print in class A. So that is why we get a different method being executed. What we try next is calling Print2, and you can see that when we try calling b.Print2, we get a very nasty access violation exception.

[00:54:00] Why is that? That is because we tried calling a method which is virtual. So the virtual method needs to be resolved at runtime, needs to be executed in a polymorphic manner, meaning that this Print2 was actually checked against the instance of the type we are executing on. But hey, there is no Print2 in an instance of class A, so .NET just threw a terrible exception and everything just crashed, because we tried calling God knows what. However, if we do change this Print2, [00:54:30] if we do change it to being not a virtual method, but just a regular one…

And if we try calling this thing now, what we should observe is that Print2 is executed properly. That is because Print2 now does not need to go through the callvirt instruction. So you can see that even though we are calling b.Print2 against an instance of type [00:55:00] A, it still works. Just because we switched from using a virtual method to a non-virtual one, we were able to call a method of a completely unrelated type on an instance of a different thingy. And how can this be useful for us at all? We can actually start toying with it and do two fancy examples.

First thing is we will try serializing [00:55:30] a non-serializable type. Okay? So let’s see what we have here. We have a class Root, which is marked as serializable, right?

And it has some nice field, but it also has a non-serializable child. Okay? So what the non-serializable child is, is just another class with yet another field, but it does not have the attribute, right? There is no [Serializable] in here. So what happens is, if we try serializing [00:56:00] an instance of the Root class, we’ll get an exception that, “Hey, the child is not serializable.” What can we do instead? We can get the instance of this class and replace it with a similar class, or actually a very similar thingy, which has exactly the same field, exactly the same method. Meaning the same structure, the same schema in memory. It’s actually reserved and stored [00:56:30] in the same manner when it comes to the actual order of bytes. However, this thingy has a [Serializable] attribute on it, meaning that we can serialize an instance of SerializableChild.

What we do next is we create a hierarchy. We create the root object with one non-serializable child. And now if we tried to serialize this, we would get the exception: “Hey, the root object is not serializable because the child cannot be serialized.” So we [00:57:00] create an instance of the serializable child on the side. Ideally we copy exactly all the fields, one by one, with reflection or whatever else. And then what we do is we modify the instance. So we create a pointer to the root object here, and we would like to get a pointer to this non-serializable child. So we get the pointer to the non-serializable child… Sorry, to the serializable child. And [00:57:30] then we modify the pointer in place. So we get the handle to the object, and we move by a specific amount of bytes to the offset where the field is being stored.

And then we place the new child over there. And now when we try running this thingy… Sorry, when we try executing this thing and actually printing into the stream, what we will see is that this thingy should work correctly. [00:58:00] It should not fail at all. Why is that? That is because when we execute this part, the objects should… Oh, sorry, I ran it with the debugger, which is not what I wanted. I want to run it without the debugger. What we should get here is, you can see that, hey, before changing the children, it was non-serializable; then we hacked it, and we do have a serializable child. And this time it printed exactly the same values, but when we tried serializing it, it did not crash, it [00:58:30] worked properly. So you can see, just by toying on this level, we can replace the type, replace the instance, and everything works correctly.
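The field swap the speaker describes can be approximated without raw pointer arithmetic by reinterpreting a layout-compatible twin with `Unsafe.As`; a sketch under the assumption of identical field layout (class names are illustrative, and `BinaryFormatter` is obsolete and disabled by default in recent .NET, so treat this as .NET Framework-era style):

```csharp
using System;
using System.IO;
using System.Runtime.CompilerServices;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
class Root
{
    public string Name = "root";
    public NonSerializableChild Child = new() { Value = 42 };
}

class NonSerializableChild { public int Value; }   // no [Serializable] attribute

[Serializable]
class SerializableChild { public int Value; }      // identical layout, but serializable

class Program
{
    static void Main()
    {
        var root = new Root();

        // Build the layout-identical twin and copy the state over.
        var twin = new SerializableChild { Value = root.Child.Value };

        // Reinterpret the twin's reference so it can sit in the typed field;
        // the runtime type of the object remains SerializableChild.
        root.Child = Unsafe.As<NonSerializableChild>(twin);

        // The formatter inspects the runtime type, which is now serializable.
        var formatter = new BinaryFormatter();
        using var stream = new MemoryStream();
        formatter.Serialize(stream, root);
        Console.WriteLine($"serialized {stream.Length} bytes");
    }
}
```

The point is the same as in the talk: serialization checks the runtime type of the object, so swapping in a byte-compatible serializable twin makes the whole graph serializable.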

Moving on, we can actually do even more magical thingies. We can implement multiple inheritance in C#, or actually something which is going to pretend like we are inheriting from multiple classes. Let’s say that we do have multiple bases. So we would like to have true multiple [00:59:00] inheritance in C#. We do have Base1 with one integer field and a PrintInteger method. Okay. Then we have Base2 with a float field and PrintFloat. Then we have Base3 with two short fields and a print method. Then we have Base4 with one string field and PrintString. What we would like to do is we would like to create a class which inherits from Base1 and Base2 and Base3 and Base4. How do we do that? We’ll actually be [00:59:30] swapping the contents of our object and morphing it, depending on the use case we have. Because at the .NET intermediate language level, we cannot implement true multiple inheritance.

However, when we are dealing with the object, we actually need it to be an instance of one single class, not four of them, right? We’ll be dealing with just an instance of one single class, but we’ll be changing, switching [01:00:00] what the parent is for that instance. So we’ll have an interface called MultipleBase, which will have a very nice dictionary in which we’ll be holding the state. So we’ll have a dictionary holding all the possible fields for this type, meaning that there will be this integer field, the float field, two short fields, and one string field, which are in all those base classes together. And we also store the type which the instance is currently [01:00:30] pretending to be. So we have this CurrentState thingy, and what we need to do now is we need to create a new class which inherits from the specific base and implements this interface.

Okay? So that we do have those things wrapped together. Carrying on, what we do is we create an instance of FakeChild1, which inherits from Base1 but is also an instance of this MultipleBase, where we do specify that we would like to be [01:01:00] instances of Base1, Base2, Base3, and Base4. What we do next is we morph this child from FakeChild1 to Base1, meaning that now this thingy becomes an instance of Base1, and you can see that we can assign to the integer field and call the PrintInteger method, right? This is a regular variable of the Base1 type. Next, we morph the same child to represent [01:01:30] Base2. So you can see that this time we do have a float field and the PrintFloat method. With Base3, we have two short fields. With Base4, we actually have the string, right?

And then we can go back. We can morph to Base3, Base2, Base1, and start printing those things. So we can see that all those things are executed properly, correctly, as we would like them to be. So this is the output of the application. Now, how does this work? Okay. You can see that we assign 123 here, [01:02:00] 456, some other integer, some abracadabra, all those things. How does it work? The crucial part is in this Morph method. So what we do is we get the state of the object, and if there is a need, we change the type. We store the values of the fields in the dictionary, so they can be later reused, so they can be preserved. But ultimately what we do is we change the type. How do we do that? Because each type, each instance, has a [01:02:30] so-called… Just like methods have a method handle, the same way objects have a thing which is called a type handle.

And this type handle is a bit of metadata, which is used with reflection and all the stuff, which specifies that, “Hey, I am an instance of string. I am an instance of object. I am an instance of whatever.” So we get this type handle, and what we do is we modify this. So we get the actual, the original type handle [01:03:00] and replace it with the new type we would like it to be. So we go to the object, and when we are calling this morph thingy, we go to the child, which is an instance of FakeChild1. And then we go directly, straight into its metadata, and we write an integer over it, saying that, “Hey, you are not FakeChild1. You are not FakeChild2. You are Base1, Base2, Base3.” Whatever is needed.
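A minimal sketch of such a Morph helper, assuming (as the speaker warns) the undocumented object layout where the first pointer-sized word of a managed object is its method table pointer; this layout can change between runtime versions, and the code needs `<AllowUnsafeBlocks>`:

```csharp
using System;
using System.Runtime.CompilerServices;

static class TypeHacks
{
    // Overwrite the object's method table pointer so the runtime believes it
    // is an instance of `target`. Both types must have compatible field
    // layouts, and nothing here pins the object, so this is illustrative only.
    public static unsafe void Morph(object obj, Type target)
    {
        // Reinterpreting the reference as IntPtr yields the object's address;
        // that address points at the method table pointer word.
        IntPtr objAddress = Unsafe.As<object, IntPtr>(ref obj);

        // RuntimeTypeHandle.Value is the method table pointer for classes.
        *(IntPtr*)objAddress = target.TypeHandle.Value;
    }
}
```

After `TypeHacks.Morph(child, typeof(Base1))`, calls like `child.GetType()` and virtual dispatch consult `Base1`'s method table, which is exactly the "you are not FakeChild1, you are Base1" overwrite described above.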

So this is how we can do it. And as you can see, it works, it has compiler [01:03:30] support, and it allows us to implement the multiple inheritance thingy. So to sum up, as you have seen, by understanding all the things under the hood and all the mechanisms, with machine code, with objects, with all the stuff, we can actually play with it a lot. We can modify objects, we can generate machine code, modify methods, hack things. Obviously you need to understand quite a lot. You need to understand calling conventions, [01:04:00] the garbage collector. You need to understand the CPU, your architecture, how to generate machine code.

You need to understand Win API, permissions, all the stuff which happens under the hood, but ultimately, once you get a grasp of all of that, it’s just a bunch of bytes which you can modify, which you can toy with. And this is what you can do once you learn all the things which are just below the C# language, while still implementing those examples without leaving the C# language directly. This QR code points to [01:04:30] the slide deck. If you would like to download the materials, you can find them over there, and now let’s move on to the Q&A session, or over to you, Gael.

Gael: Yes. Thank you. That was so fascinating. And it really gives an insight into the runtime, like the internals of the CLR. Before we go to Q&A, I would like to introduce [01:05:00] the next webinar. So, sharing this slide. Yes. So the next webinar, next month, will be about Roslyn source generators. Stefan Pölz will explain how and why to never send a human to do a machine’s job. Stefan Pölz is a clean C# coder. He’s a speaker and an open source contributor. So see you at the same [01:05:30] time on February 23rd.

We are continuing with questions. And actually we have just one question, from Abraham. And that was a question about hijacking methods and executing byte arrays. The question is: if you can set an arbitrary address in memory and [01:06:00] run whatever there is there, then what is stopping us from jumping to memory that is not allocated by our application and running that? Isn’t that a security risk?

Adam: Yeah, that’s a very good question. And the answer is, or should be, pretty straightforward. Yes. Nothing stops us from jumping to whatever address we have. We can do whatever we wish. We can execute any code, [01:06:30] jump to any place in our application, do whatever we want. I could even make a very bad joke here, like: the code execution results in code execution. You can do whatever you want. Is that a security risk? Not necessarily. Not in the way I’ve been doing it. That is because it is me who controls the code you are running. However, if you are loading some external code, like from a plugin or from the Internet or whatever else, then the answer [01:07:00] is yes. You do have a security risk. You cannot control what the application will do, because once you give it permission to just trigger any code, any C# code, well, it can execute, as we’ve seen, any machine code you like. Is this a security risk for the operating system or other applications? No.

But security risk for the user? Yes, it is. You can for instance start calling Win API, implement keyloggers or whatever else. Yes. That’s the thingy. [01:07:30] However, if you do not load code from the external sources, then it’s the same risk as with just your application doing something nasty. The other question, actually, which is very interesting and we could ask it. Hey, so if the plugin we load can do whatever it likes, can we stop it somehow?

And the answer to that is: only partially. Meaning that there is this Code Access Security, which existed [01:08:00] in .NET Framework 2.0 and was then abandoned. So there were some mechanisms to stop you, for instance, to not allow you to call the marshal functions or whatever. But generally, if you can execute any code which is marked as unsafe, and by default you should be able to, then in unsafe code you can get pointers and do whatever you wish. So there is not a single reliable way to stop external [01:08:30] code from doing anything it wants. So generally… well, it can do whatever magic you imagine.

Gael: Good. But Adam, I think there are two conditions. If I didn’t miss anything in your presentation. There are two conditions to execute the code. First, regarding this question, the code must be in the current process. And the second, the page must have the execute flag, right?

Adam: [01:09:00] That is correct. And those things do not stop us, because you can inject code from some external process if you wish. There is actually a talk of mine, which is called DLL Injection. You can find it on YouTube, where I show you how to do it. And also, to make the page executable, you just call the Win API method VirtualProtect. And this is what I was doing in those demos. You may have recalled, there was the line “unlock page.”
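That "unlock page" step can be sketched as a plain P/Invoke of VirtualProtect; the signature and the `PAGE_EXECUTE_READWRITE` constant below are from the Win32 API, while the helper wrapping it is illustrative:

```csharp
using System;
using System.Runtime.InteropServices;

static class Native
{
    const uint PAGE_EXECUTE_READWRITE = 0x40;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool VirtualProtect(IntPtr lpAddress, UIntPtr dwSize,
                                      uint flNewProtect, out uint lpflOldProtect);

    // Mark the memory region holding generated machine code as executable,
    // so that jumping into it does not fault.
    public static void UnlockPage(IntPtr address, int length)
    {
        if (!VirtualProtect(address, (UIntPtr)length, PAGE_EXECUTE_READWRITE, out _))
            throw new InvalidOperationException(
                "VirtualProtect failed: " + Marshal.GetLastWin32Error());
    }
}
```

This is Windows-only; on Linux the analogous call would be `mprotect` with `PROT_EXEC`.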

Gael: Yes.

Adam: Which was actually doing that. So [01:09:30] you do have permissions to do that and nothing will stop you from doing so.

Gael: Okay. Maybe a second question from myself since we have no question from the audience, is any of the tricks you’ve been showing used in production?

Adam: Yeah, I do use those tricks in production. For instance, the trick with desktops I’ve shown you, this is actually something I’m using in production. And [01:10:00] that is because… well, I was running some interactive application like Puppeteer, like automated Chrome, or just an automated browser, which was stealing focus a lot. Meaning that it just pops up on your screen, takes control over your keyboard, et cetera. Pretty annoying, I’m telling you. So what I wanted to do is I wanted to run it on some other desktop, and .NET did not allow me to do it cleanly. So I was running it with the hack I’ve shown you. So this was [01:10:30] in production, actually still is. The example with hijacking the thread constructor, so when there was some exception being thrown and we were wrapping this with try-catch, this is also something I was using in production.

You may ask why? “Hey, why can’t you just change this thread to handle the exceptions with try-catch by itself?” Right? And the answer is: I was actually using some Java code at that point, because there is a thing which is called IKVM, which is a Java-to-.NET compiler, [01:11:00] which allows you to run any Java or JVM code inside your .NET application, with direct Java code, not translated to C# and compiled.

And in Java, when you do have a new thread and you throw an exception, it does not kill your application. So Java programmers do not necessarily care to handle all the unhandled exceptions, because it’s safe, or it’s harmless, in their applications. So I was running some code which was creating a thread in Java, [01:11:30] not handling the exception, and then crashing my .NET app. So I had to intercept those things this way. I was doing a couple of other things, like catching stack overflow in my custom test runner, right?

When you do have a stack overflow exception, it kills your application. You can handle those things a bit, you can deal with them. So there were other examples I was playing with. So generally the answer is: yes, those things are in production, but I generally do not recommend you to use them if there is [01:12:00] any other way. You need to understand quite a lot. And they tend to break over time: when you update .NET and change major versions, that code may just start throwing exceptions or something, just because you rely on the internal representation of your methods. So you need to be very careful whether that pointer is actually eight bytes from the beginning of the structure, or maybe 12 bytes or whatever. So this is nasty when doing upgrades, but technically it [01:12:30] has worked for many years now in my production code.

Gael: Excellent. Thank you very much. This is the end of this webinar. It has been an amazing, fascinating webinar, Adam. Thank you very much for being with us today. On behalf of the team, this is Gael Fraiteur, and I’m saying goodbye for now and see you on the next PostSharp live webinar on February 23rd. Thank you very much.

Announcing Metalama: a modern Roslyn-based meta-programming framework


We’re proud and happy to announce the first preview release of Metalama, a modern Roslyn-based meta-programming framework for .NET that helps reduce boilerplate code and architecture erosion. Metalama is the .NET developer’s next best companion for boilerplate elimination, custom live templates & code fixes, and architecture validation.

To learn more about Metalama, visit our website, check the impressive list of features, skim the documentation, or play in the online sandbox.

GO TO METALAMA WEBSITE

Today is a good day to try. After a couple of early demonstrators, today’s release is the first mature one, and we consider it an almost production-ready product, except for a few missing features and further testing. If you have a small, fresh project, it’s a good time to try Metalama and see what it’s capable of.

Metalama is the successor of PostSharp. Metalama surpasses PostSharp in many regards, including design-time experience, ease of use, platform support, performance, and debuggability. However, there are still feature gaps, so it is not yet a good time to perform the big migration on your own. No worry! We will still be supporting PostSharp until Metalama becomes a complete replacement, and even for some time after this milestone.

We will provide caviar support to early adopters. If you’re using PostSharp and are excited to move to Metalama, we would like to hear from you. Depending on Metalama’s readiness and our own capacity, we will do all we can to help early adopters port their code to Metalama. Any questions? Join us on Slack for anything.

You can use Metalama for free. Metalama is free during the preview for everyone, but that’s not all. You can continue with the free Metalama Essentials edition, which includes a broad set of features without limitation of project size or company size. We will also offer free licenses of our unlimited product for open-source projects, students & classrooms, MVPs, and other influencers.

Webinar: Source Generators with Stefan Pölz


Stefan Pölz explains source generators: how they are currently used in .NET, how you can build your own, test them, and make them fast. We’re publishing the recording and the transcript of the webinar that went live on February 23rd.

Introduction

Gael (00:00:03)

Okay, we will start. Welcome everybody to this webinar. My name is Gael Fraiteur, I’m the President of PostSharp and the lead developer of PostSharp and of Metalama. And today, we have a webinar about a topic I love, and this is Roslyn source generators, a feature, maybe one year old, of Roslyn and Visual Studio that allows you to generate code. But Stefan will tell us more about this. Stefan Pölz, or FlashOver. Why FlashOver, Stefan?

Stefan (00:00:59)

Well, this has been my alter ego for quite some time.

Gael (00:01:02)

Okay.

Stefan (00:01:02)

I used to be a firefighter, some years ago, and this is a firefighter term, and it stuck.

Gael (00:01:09)

Okay. Stefan, you are a senior developer at ADMIRAL Sportwetten in Vienna. You are interested in and passionate about clean code, especially in C#, and test-driven development. You like hacking with Roslyn, meta-programming, source generators. And recently, you spoke at NDC Oslo and many user groups. Before I give the floor to Stefan: the webinar will last about one hour. If you have questions, please use the Q&A facility in Zoom, and we will answer the questions at the end. I will repeat and select the questions. Stefan, I think we can start.

Stefan (00:02:11)

Cool. So I’ll take over the screenshare?

Gael (00:02:31)

Yes.

Source Generators

Stefan (00:02:32)

Okay. [inaudible 00:02:33] thank you very much for the introduction, Gael. And thank you very much for having me. I’m super, super excited to be here. I have obsessed over Roslyn in general and source generators in specific in the last couple of years. So I’m really welcoming the chance to share almost everything that I know about them. So Gael already introduced me perfectly, so I will skip over that. Let’s have a look at what we are going to inspect today.

Stefan (00:03:05)

We’ll have a look at what source generators are, by example. So source generators have been introduced with .NET 5.0, and now, since .NET 6.0, the .NET team has actually built source generators themselves and shipped them with the latest .NET 6.0 SDK, and this will be expanded in .NET 7.0 already; there’s something in preview already released. We will see what has been added to C# 9.0 which facilitates source generators. Then we are going to have a look at what the power of source generators is, but also what their restrictions are, and how they look.

Stefan (00:03:51)

A lot about tooling: how we can get more familiar with Roslyn in general and source generators in specific, how to debug them, how to apply more recent C# features like nullable reference types in .NET Standard 2.0 libraries (some of the restrictions, we will hear all about that), different versions. Then we’ll see a bit of a performance issue with source generators that we will then fix as well, and have a look at how we could publish our own source generators via NuGet. And at the end, there are some more examples and the Q&A section, and also a little bit of a surprise from Gael. I’m really looking forward to that. So let’s get started, by example.

Stefan (00:04:42)

In .NET 6.0, which has been released in November of last year, the team actually built in source generators themselves: this one, which facilitates System.Text.Json; another one, which simplifies the use of logging; and actually also the Razor C# files are generated with [inaudible 00:05:15] by now. So, let’s jump right into code.

Stefan (00:05:18)

The first example I would like to share is… let’s zoom in, okay, is logging. So here we have a simple usage: we are creating a logger with the minimum log level of Trace, and we print Hello, World to the console. Let’s see if it’s actually working, and it does: Hello, World is printed. Now we can also parameterize this logging. For example, let’s add a name here, and let’s add this now to the parameter, so put "World" here. And this code now still produces the very same output. However, now there’s a little bit of a caveat that we need to be aware of.

Stefan (00:06:16)

Let me also show this by not passing a string here, which is a reference type, but actually passing an integer here, which is a value type. I will enter 0x_F0, which is basically 240. So let’s see if we get Hello, 240 in the console. We do indeed. But there is something to be aware of here: we basically caused a little bit of a performance hit, because when I hover over this LogInformation and have a look at the definition of this extension method, we see that it takes a params object array.

Stefan (00:06:56)

Let’s have a look at this in SharpLab; this will depict the problem. So I have here this method, I do a params object array, those are my args. Let’s create a caller: public, void, Caller. And this Caller now calls into the method M with 1, 2, 3. But what is actually going on under the hood now? We see this on the right side. Here we now actually instantiate this object array; the params is just, under air quotes, a C# feature, and this is then compiled into something which can actually be run. And we are, more or less hidden, creating an object array here and setting these.
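Roughly, the lowering SharpLab shows looks like this (the method names match the demo; the lowered form is approximate):

```csharp
public class Demo
{
    public void Caller() => M(1, 2, 3);

    public void M(params object[] args) { }

    // What the compiler actually emits for Caller, approximately:
    public void CallerLowered()
    {
        object[] array = new object[3]; // hidden array allocation
        array[0] = 1;                   // boxing: each int becomes an object
        array[1] = 2;
        array[2] = 3;
        M(array);
    }
}
```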

Stefan (00:07:50)

Now, since this is an object array and we are assigning value types, the integers, those are structs, we also have boxing involved. So here we have some hidden allocations. Now, to improve that: some time ago, with Microsoft.Extensions.Logging, the concept of high-performance logging was introduced. This is achieved via the LoggerMessage.Define method. So let’s build a helper class; I call it Log. And in here, I now can define this message. So I have my static Action, and into this Action we first pass the logger.

Stefan (00:08:51)

Then we pass in the parameters that we have, for example an integer, and there is always an optional exception as well; mark it as nullable. I call this Hello. Now we must say LoggerMessage.Define, and we define this, so this is one parameter, the integer, and we can now define the log level; I do LogLevel.Information. And let’s say I want to pass in this 0x_F0 here, which is 240. And now the format of this message, I just paste it in here. Now maybe I want it to be an extension method. So I make a public static void, I call it Hello, and invoke this delegate that we’ve built here. So this is an extension method, so I say this ILogger, and we invoke that with, of course… we also want to pass in the parameter, so this is my integer, I call it number, and pass it in as well.

Stefan (00:10:25)

By the way, this here, this is the event ID; so every log message also has an event ID. And we don’t have an exception, so we can pass in null. Now we can invoke this extension method, so we can [inaudible 00:10:43] Logger.Hello and pass in our number. Let’s use a different number, let’s use nine. So we now should have the same result; this time it should print Hello(9). Let’s actually confirm that. Indeed, we have Hello(9). So it’s doing the same, but we got rid of both the object array allocation and the potential boxing for this params array.
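Put together, the hand-written high-performance pattern the speaker builds up looks roughly like this (the message text and event ID mirror the demo; the `Microsoft.Extensions.Logging` package is assumed):

```csharp
using System;
using Microsoft.Extensions.Logging;

public static class Log
{
    // The message template is parsed once, up front, instead of on every call,
    // and the typed delegate avoids the object[] allocation and boxing.
    private static readonly Action<ILogger, int, Exception?> _hello =
        LoggerMessage.Define<int>(
            LogLevel.Information,
            new EventId(0x_F0),   // 240
            "Hello({number})");

    public static void Hello(this ILogger logger, int number) =>
        _hello(logger, number, null); // no exception to attach
}

// Usage: logger.Hello(9); logs "Hello(9)" without allocating an object[].
```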

Stefan (00:11:16)

But this is now quite some code that we always have to write in order to have this performance improvement. So now comes one use case of a source generator. There are two prime use cases. One of them is to reduce the implementation of repetitive code. Since this high-performance logging pattern is always the same, I need to define my message and then actually invoke it, this could be done via a source generator. So the team has introduced a source generator for that. I will grab that example and replace that. So we now have our log message here, and that basically… let’s change this to an integer [inaudible 00:12:11], and we define that, basically we control that, via this LoggerMessage attribute.

Stefan (00:12:26)

So I still do the very same thing. I have the 240 as an event ID, the LogLevel is still Information, and it’s still the same format string. And let’s see, if we invoke that, if we indeed get the same result. And we do, we still get Hello(9), but with much less code. And this is now actually generated by a source generator. But where is this code? Currently I have a bug in my tooling, so you actually get a compiler error in the IDE, but we saw it running, so it works. But where is this code? I can actually see that when we go into the solution… well, if my tooling doesn’t show this, perhaps I can view the build output. Let me quickly grab that; that’s not it. There we have it. So this is what’s actually generated by the source generator. We see this is auto-generated, and it basically does the very same thing: it defines this Action via the LoggerMessage.Define method, and also wraps this in an “if logger is enabled” check. It is always smart to wrap your log message in such a check, because if we set up a higher minimum log level than the message’s, then we don’t want to print it in the first place and we can avoid the invocation. And yeah, so this is one use case of source generators: to eliminate repetitive code.
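The declaration driving the .NET 6 logging source generator can be sketched like this; the event ID, level and message follow the demo, and the generator supplies the body:

```csharp
using Microsoft.Extensions.Logging;

public static partial class Log
{
    // The generated implementation caches a LoggerMessage.Define delegate
    // and wraps the call in a logger.IsEnabled(...) check.
    [LoggerMessage(EventId = 0x_F0, Level = LogLevel.Information,
                   Message = "Hello({number})")]
    public static partial void Hello(this ILogger logger, int number);
}

// Usage stays the same as the hand-written version: logger.Hello(9);
```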

Stefan (00:14:10)

Another use case is to increase performance. For example, we have System.Text.Json, so JSON serialization and deserialization. I will copy in two pieces of code, here and here. So what we are doing is: with .NET 6.0, new methods have been added to the JsonSerializer Serialize and Deserialize methods which take this JsonTypeInfo object. And this can also be generated by a source generator. So we define a partial method here… oh, sorry, a partial class, which derives from JsonSerializerContext. We tell the generator for which type it should be created, and we can also pass in some options, such as that the output should be written indented. Here we define the entity that we want to serialize and deserialize, and then we do a little round trip: we create the entity, we serialize it to JSON, print it to the console, and then we deserialize it again and again print it to the console.
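A sketch of that setup; the entity shape is assumed from the transcript's mention of a name string and a number, and the context class name is illustrative:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public record Entity(string Name, int Number);

// The source generator fills in this partial class with compile-time
// serialization metadata (JsonTypeInfo) for Entity.
[JsonSourceGenerationOptions(WriteIndented = true)]
[JsonSerializable(typeof(Entity))]
public partial class AppJsonContext : JsonSerializerContext { }

public static class Program
{
    public static void Main()
    {
        var entity = new Entity("name", 42);

        // Round trip using the generated JsonTypeInfo, no runtime reflection.
        string json = JsonSerializer.Serialize(entity, AppJsonContext.Default.Entity);
        Console.WriteLine(json);

        Entity? back = JsonSerializer.Deserialize(json, AppJsonContext.Default.Entity);
        Console.WriteLine(back); // the record's generated ToString
    }
}
```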

Stefan (00:15:32)

And let’s see what’s the output. And I do get… well, I do have a failure here because I put the log level to Error. So we see that we first write this indented serialized entity, and then again the deserialized [inaudible 00:15:53]. So this is the record ToString that we see down here. And the source generator now generates a lot of code here. We can perhaps see that (not sure what’s wrong with my tooling), but actually there is a bunch of code. I’m just opening one file, because there are many. So this is one file where the serializer options are defined, and this default serialization context which carries this type. So both this default serialization context and the type info for this entity are generated, and there’s a lot of code. I will open another file and another file.

Stefan (00:16:44)

So this is all generated, and the benefit now is that the previous System.Text.Json, without a source generator, needs to do reflection. So at run time it needs to figure out: okay, what’s the entity type? It has a string which is called Name, it has a number which is called Number, and it has to figure this out at run time. Well, they then cache the information, so it’s only the first invocation which is still a bit more expensive, but then this cache requires memory that lingers around in our application. Source generators now do a very direct serialization and deserialization, already evaluated at compile time, which avoids this startup cost. So we have improved performance, but also reduced memory allocations. Those are the two very powerful use cases of source generators: we can get rid of tedious patterns to implement, but also potentially increase performance, most likely by reducing System.Reflection.

So now, we saw this partial keyword being used a lot, but this is actually C# 9.0 syntax. Because before C# 9.0, if I do… actually, let’s show that in Visual Studio. Let’s get rid of that, create a public static partial class, call it Helper, and in there a partial method; I have it here, I call this method Get. So, before C# 9.0 this wouldn’t have compiled. Well, I do get a compiler error here, but I’m not sure what’s wrong with the tooling. This wouldn’t have compiled because the explicit accessibility keyword was not allowed. Partial methods so far have always been implicitly private, because if there is no implementation, then the compiler basically strips the method from the compilation. This is also why they couldn’t have return types, and out parameters haven’t been allowed in partial methods either.
Now with C# 9.0 they are allowed, but if I now enter an explicit accessibility, such as internal, the new rules, the C# 9.0 rules, apply, and they state that it is okay to return something, and out parameters are okay.

Stefan (00:19:47)

But now the implementation must be provided. If the implementation isn’t provided, we get a compiler error. Now the tooling actually fires this compiler error. Not sure what’s wrong with my setup; it seems that it can’t find my source generator, but it’s actually there, because I wrote a source generator which looks for this partial method and actually provides an implementation for it. And let me show that. So if I do, via the logger… or let’s just do a Console.WriteLine, and I say Helper.Get. And if I run this, dotnet run in the console, we actually get “From Generator” printed. But we haven’t written this; this is coming from a generator.

Stefan (00:20:37)

Let’s have a look at the generated file; it looks like this. So this generator now creates the second part. Let me put this side by side, actually. So the generator creates the second part of this partial class and creates the implementation of this partial method. So this implementation, with the usage of [inaudible 00:21:00], must be provided, but it may be provided by a source generator.
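The two halves of such a C# 9.0 extended partial method can be sketched side by side; the method name and the returned string follow the demo, and the generator that emits the second half is assumed:

```csharp
// Hand-written half: an explicit accessibility modifier on a partial method
// (C# 9.0) means some part of the compilation MUST provide a body.
public static partial class Helper
{
    public static partial string Get();
}

// Generator-emitted half, added to the compilation as a string:
// <auto-generated/>
public static partial class Helper
{
    public static partial string Get() => "From Generator";
}
```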

Stefan (00:21:04)

Another C# 9.0 improvement is module initializers. If I add a bit of code here to this Helper class, we now have the [ModuleInitializer] attribute, and this is called whenever the module initializes. Most of the time, assembly and module are the same thing, or at least in my mind; I’m not entirely sure what the difference between an assembly and a module is, but basically, when the assembly loads, then this one is run.

Stefan (00:21:35)

And I can prove that: if I call this Helper.Text property… we have no code which actually invokes this Init method here, we see it has 0 references, but still… I close the console. If we run this, we get this initialized string that we actually set here. We never invoked this method, but this is done with this new C# 9.0 feature, module initializers. So with module initializers, we could ensure that something that a source generator requires is initialized. And those two features now may be heavily consumed and used by source generators.
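The module initializer demo can be sketched like this; the property name and strings are assumptions matching the description:

```csharp
using System;
using System.Runtime.CompilerServices;

public static class Helper
{
    public static string Text { get; private set; } = "not initialized";

    // Runs when the containing module is loaded, before any other code in it
    // executes; no call site is needed. The method must be static, void,
    // parameterless, and internal or public.
    [ModuleInitializer]
    internal static void Init() => Text = "initialized";
}

public static class Program
{
    public static void Main() => Console.WriteLine(Helper.Text); // prints "initialized"
}
```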

Stefan (00:22:53)

Now, what are the laws of source generation? A source generator gets as an input the compilation of the user. So basically the user code — all the code that I have in the solution where I include the source generator is, well, not all of it, but most of it is put as an input to the source generator. For example, the C# code is parsed into a syntax tree. This is very similar to other Roslyn extensions, such as analyzers or code fixes. As a matter of fact, a source generator, technically speaking, is a Roslyn analyzer as well, so the programming model is very similar. So we get the syntax trees, and we also get the semantic model that we can ask questions of: what entity is this identifier? What type is it? What members does it have?

Stefan (00:23:56)

One limitation is we can only add new sources to the compilation. And those new sources are actually added as strings. So the input is the syntax trees, the semantic model, additional files and so on, and the output is basically a string. And this string now gets put back into the compilation and will be part of the final compilation, which will then be put into the DLL if we build it, or into the NuGet package when we pack it. And we can only add new sources — a source generator cannot modify or remove existing code, it can only add to it. So this is how this fits together with partial as the basic extension point.

Stefan (00:24:44)

A source generator can also produce diagnostics, because technically speaking a source generator is an analyzer. It could produce warnings, for example: if I misuse the source generator, then we could produce warnings to indicate that to the user and guide them on how to use it correctly. We can also access additional files. For example, we could mark a text file as an additional file, and in the source generator we could have something like a transpiler, which then spits out C# code from this additional file, whatever it is. Maybe we could transform JSON into some C#-initialized dictionary, perhaps.

Stefan (00:25:33)

Source generators are un-ordered. So if you have multiple source generators — well, they do run in a particular order, but this is undocumented, so this may change. Source generators cannot depend on each other, so each source generator only sees the same input; they do not see each other’s output. And I already said that technically they are analyzers. And we went through the use cases: avoid boilerplate code, avoid Reflection for-

Stefan (00:26:00)

The use cases. Avoid boilerplate code, avoid reflection for better performance. In this presentation, everything you see is also available on GitHub. I put in documentation, and also the cookbook, which is from the .NET team and delivers some recipes on how to solve issues with source generators. Now, what’s the anatomy of a source generator? What does it look like if we want to build one? The most important thing is that we need to include it in our project, which has to be a .NET Standard 2.0 project, because source generators also need to run in Visual Studio, which is still based on .NET Framework. So for compatibility reasons, just as any other Roslyn plugins, such as code refactorings or diagnostic suppressors, they have to be published as .NET Standard 2.0 packages. And this project then needs to reference Microsoft.CodeAnalysis.CSharp.Workspaces. Maybe let me switch to… red should be better visible. Yes. So this includes everything we need to build a source generator. Source generator support has been added in version 3.8.0, which is basically .NET 5. So this is only compatible with .NET 5 and higher; source generators don’t work with the .NET Core 3.1 or .NET Framework SDKs. We could consume one in a project which targets .NET Framework, but this project needs to be built with the .NET 5 SDK.
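The generator project file described here might look like this (the 3.8.0 version pin matches the talk; the exact project name is hypothetical):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- Must be netstandard2.0: the generator has to load into
         Visual Studio, which still runs on .NET Framework. -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- 3.8.0 is the first Roslyn version with source generator support
         (shipped with the .NET 5 SDK). PrivateAssets keeps the dependency
         from flowing transitively to consumers. -->
    <PackageReference Include="Microsoft.CodeAnalysis.CSharp.Workspaces"
                      Version="3.8.0"
                      PrivateAssets="all" />
  </ItemGroup>

</Project>
```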

Stefan (00:27:57)

And additionally, I always like to add this set of analyzers. So this is basically an analyzer which analyzes the source generator that I build — some sort of meta-analyzer, because it analyzes my analyzers. Always add them with PrivateAssets set to all, because we don’t want these dependencies to transitively flow to the consumer. And then we create our generator. Our generator derives from ISourceGenerator. We need to define two methods: we have an Initialize step, where we basically set up the source generator, and then the Execute step, which eventually could produce — it doesn’t have to, but could produce — a C# string. And we need to put the Generator attribute on top of the type so that the tooling, the driver, is aware of it. Additionally, we can define a SyntaxReceiver. Actually, let me show this in action.
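Put together, the skeleton of a Roslyn 3.8-style generator looks roughly like this (type names here are placeholders, not from the demo):

```csharp
using Microsoft.CodeAnalysis;

[Generator] // lets the driver discover this type as a generator
public class MyGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // Optional: register a receiver that collects candidate nodes
        // while the compiler walks the user's syntax trees.
        context.RegisterForSyntaxNotifications(() => new MySyntaxReceiver());
    }

    public void Execute(GeneratorExecutionContext context)
    {
        // Inspect context.Compilation / context.SyntaxReceiver here,
        // then add generated C# to the compilation as a plain string.
        context.AddSource("MyGenerated.g.cs", "// generated code goes here");
    }
}

class MySyntaxReceiver : ISyntaxReceiver
{
    public void OnVisitSyntaxNode(SyntaxNode syntaxNode)
    {
        // Called for every syntax node in the compilation;
        // filter down to the nodes the generator cares about.
    }
}
```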

Stefan (00:29:06)

So here I will briefly show another example of a source generator. Examples, yes. If you have seen the talk from David Fowler, “Implementation details matter”, he showed the example of how Enum.ToString is a little bit slower than potentially anticipated, because in the background it does reflection. And these cached reflection values are then looked up via a binary search, in a way. So this is not as fast as it can get. So one suggestion is to actually have a source generator which creates a more naive approach, which is basically just a switch over all the members of that enum, and then basically does a nameof and prints this out. And this is significantly faster than doing a regular Enum.ToString.
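The shape of the code such a generator might emit for, say, StringComparison — a hand-written sketch, not the actual F0.Generators output:

```csharp
// <auto-generated/>
// A switch over the enum members avoids the reflection-based
// lookup that Enum.ToString performs internally.
public static class EnumInfo
{
    public static string GetName(System.StringComparison value) => value switch
    {
        System.StringComparison.CurrentCulture =>
            nameof(System.StringComparison.CurrentCulture),
        System.StringComparison.Ordinal =>
            nameof(System.StringComparison.Ordinal),
        // ... one arm per enum member the generator found in use ...
        _ => throw new System.ArgumentOutOfRangeException(nameof(value)),
    };
}
```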

Stefan (00:30:20)

So how can we create this code? So here, everything is generated. This EnumInfo type is generated — oops, excuse me — and all its members are generated as well. So we see again the auto-generated comment on top. When we generate a file, always put the auto-generated comment on top, because by default this suppresses other analyzers from trying to analyze our generated code, since the user has no direct control over what the generator creates. And we see in the tooling that Visual Studio shows this file is auto-generated by that generator; you cannot edit it. Let’s jump to that EnumInfo generator.

Stefan (00:31:21)

We see it derives from ISourceGenerator and it has the Generator attribute on top. Now in the Initialize step — this is optional — we can register a SyntaxReceiver via the RegisterForSyntaxNotifications method. And the SyntaxReceiver is a type that derives from ISyntaxReceiver, and it has one method, OnVisitSyntaxNode, and this gets in each and every syntax node in the compilation — so basically the user’s code. So just to give a bit of context: what’s a syntax node? I will again jump to SharpLab.

Stefan (00:32:11)

And exactly this code — what would it look like from a syntax tree point of view? I can switch to the Syntax Tree view, and now we see the Roslyn representation of that code. We see this [inaudible 00:32:24] compilation unit. This is now our tree. We have a using directive, which is here. We have our class declaration, which is this entire type. Now this class declaration has a public keyword. It has two method declarations, which I can jump into. We will eventually find out that the first method declaration is this caller method, whose return type is a predefined type — this void keyword. Actually, as a matter of fact, System.Void exists; it’s an unspeakable type, but it’s basically there.

Stefan (00:33:06)

And the second method is our other method, which has a parameter list. And in this parameter list, we have several parameters; one parameter has the params keyword, and so on and so forth — we can keep going and expanding the tree. And each of these things that I can expand — each of these is a node. And those nodes now get supplied to this SyntaxReceiver’s OnVisitSyntaxNode, and now we can do filtering. So in the SyntaxReceiver, I basically… What we want to achieve, if I jump back to this example, is to find every invocation of the EnumInfo.GetName method. So we find all the GetName methods which are invoked with one argument. They are candidates for what we actually want to generate source code for.

Stefan (00:34:08)

And this is basically what we do. So we check: is this SyntaxNode indeed an invocation expression — calling a method, for example? We have a look at this invocation; if it has indeed one argument, then it’s a good candidate. And additionally, we check if this expression is a member access expression — it could be something else — and we also check the name, because behind this method name actually hides the GetName that we want to generate for. And all of those candidates we can then inspect further. We basically now get this argument expression — so we extract the enum constant that it is invoked with — and then put it into a list. And this list can then be used by the rest of the source generator. So with the SyntaxReceiver, we can narrow down all the SyntaxNodes that we get to just the specific set of nodes that we may be interested in. And then comes the Execute step. So in the Execute step, we can now generate our source code. Let’s have a look at the parameter, the GeneratorExecutionContext.
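The filtering described here might look like this inside the receiver (an approximation of the demo’s logic; `candidates` is an assumed field name):

```csharp
using System.Collections.Generic;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class EnumInfoSyntaxReceiver : ISyntaxReceiver
{
    // Argument expressions of candidate GetName(...) calls,
    // consumed later by the Execute step.
    public List<ExpressionSyntax> Candidates { get; } = new List<ExpressionSyntax>();

    public void OnVisitSyntaxNode(SyntaxNode node)
    {
        // Candidate: an invocation with exactly one argument...
        if (node is InvocationExpressionSyntax invocation
            && invocation.ArgumentList.Arguments.Count == 1
            // ...whose target is a member access named "GetName".
            && invocation.Expression is MemberAccessExpressionSyntax member
            && member.Name.Identifier.ValueText == "GetName")
        {
            // Remember the argument (the enum constant) for later.
            Candidates.Add(invocation.ArgumentList.Arguments[0].Expression);
        }
    }
}
```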

Stefan (00:35:39)

So the GeneratorExecutionContext now holds all the information that we need. It has the compilation; it has parse options — for example, in the parse options we can find out whether this project has been compiled with C# 9 or C# 10. There could be additional files, if we add them as additional files to the compilation. There are also config options, so we could provide settings, for example via an .editorconfig or a global analyzer config file, or also via MSBuild — we could surface specific MSBuild properties and control the generation via that. We have our SyntaxReceiver, that we had a look at, and also a cancellation token. Since the generator takes time — it’s part of the compilation — if we want to abort it and we have a longer loop, we should from time to time check whether cancellation has been requested.

Stefan (00:36:37)

And now comes the meat of it. Here is our AddSource method, which takes a string, the hint name. This will then be shown, for example, in Visual Studio. By default, source-generated files are actually not put to disk; they’re just virtually there. There is an option to change that, for example for debugging, but in general they’re not physically available. But we can supply a hint name for the tooling to show this, for example, up here. And then we have our source string, which is actually this C# text that we want to add to the compilation. It’s literally a string. It needs to be a valid C# string, otherwise the compilation would fail. So the generator could successfully generate invalid C#, which would then, in the second step after the generator has finished, fail the compilation.
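A minimal Execute step using AddSource — a sketch with made-up names, just to show the hint-name/source pairing:

```csharp
using Microsoft.CodeAnalysis;

public void Execute(GeneratorExecutionContext context)
{
    // The source must be a complete, valid C# compilation unit;
    // invalid C# here would fail the build after generation finishes.
    string source = @"// <auto-generated/>
namespace Generated
{
    public static class Greeter
    {
        public static string Greet() => ""Hello from the generator!"";
    }
}";

    // The hint name is what tooling (e.g. Visual Studio's dependency
    // tree) displays for this virtual file.
    context.AddSource("Greeter.g.cs", source);
}
```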

Stefan (00:37:40)

And with that, I now highlight the most interesting portions, basically. First we see we have a SyntaxReceiver — this is our EnumInfo SyntaxReceiver. I also check via the parse options: is this C#? We can do this; this is standard Roslyn — we can check if the language is indeed C#. I also get the options, so this generator could also be configured via the analyzer options… But let’s have a look at how we generate the source code. What I like to do is to use an IndentedTextWriter, so I can add indentation to the source code that I generate, but it could be built from any strings. There are different means: you could use a plain StringBuilder or other means of creating the C# string, maybe a more template-based approach. Everything works — what we need is a string.

Stefan (00:38:42)

I always like to add the auto-generated comment on top. That’s the first thing. What we also should keep in mind is the language version, because we could technically generate any C# string in our generator — but what if the consuming project has its language version set to C# 9, or perhaps C# 7.3, because it’s a .NET Standard 2.0 project, which is the default? Then, for example, with C# 7.3, nullable reference types are not available yet. So #nullable enable wouldn’t be meaningful; it actually would cause a compiler warning if this feature isn’t there. So always check — well, in this generator, I actually check: is this indeed at least C# 8? And if it is, then we add the #nullable enable. So keep in mind what language versions you want to support. Then we add a namespace, we add the class, and so on and so forth. So we basically now compose our string that we then want to emit into the compilation. And eventually we ToString this and at the very end add it to our compilation. And then we have it in here, and we can actually F9 it, F12 it.

So actually, let’s put this side by side and see the generator in action, because if I start removing stuff from here, the right side actually gets updated. So if I remove everything, this here is the plain minimum that this generator generates. It basically just has a placeholder to enable IntelliSense, so at least it generates this EnumInfo type with a GetName where we can feed in any enum, but we throw an exception. This should actually never happen, because if I start typing — if I say EnumInfo.GetName and now pass an enum, let’s use StringComparison.Ordinal — now the generator picked that up. The SyntaxReceiver found this candidate, and the Execute step then added the source. And there we have it: a more performant way of how we could basically ToString our enums. Perhaps, do I have… Did I prepare that in advance? I do have a… Oops.

Stefan (00:41:46)

I must have deleted it by accident, but I did want to branch off — perhaps I wanted it at the end — which actually shows that this source-generated approach is indeed faster than the basic ToString. Now let’s come to some tooling which could help us build source generators. Oops. I already showed SharpLab with the Syntax Tree view. This is also available in Visual Studio. If I jump into Visual Studio, let’s have a look at the SyntaxReceiver file. I have here the Syntax Visualizer. I get that if I open the installer, go to Modify and select individual components. Is this now enabled by default, actually? I did not expect to see the… It’s called the .NET Compiler Platform SDK. So we need to select this somewhere; it’s not installed by default. I can’t find it right away, but it’s under the Visual Studio extension development workload. And when we install this, we get the Syntax Visualizer, and this shows the same representation that SharpLab does, but now for our code. So I can click here: if I click this field declaration, then it gets highlighted. So I see this is a field declaration, and I can jump into it — what are the nodes, how is this tree assembled? Now, a very handy tool is the Roslyn Quoter.

Stefan (00:43:53)

This now goes in the other direction, if you would like to find out the other way around. So if I put in a string — let’s say public static class MyClass — what would I need to do, Roslyn-style, in order to create this tree? So I can now generate this code, and I see down here: this is basically what I need to call via Roslyn in order to create a syntax tree which represents the string up here. So I create a compilation unit, which has the class declaration; and this class has as modifiers the public keyword and the static keyword, and so on. So this goes in the other direction.

Stefan (00:44:41)

And there is also a source browser. Perhaps you are familiar with source.dot.net, which is basically the source browser for the .NET runtime repository, where we can find types such as the CancellationToken that I mentioned previously. And we can then go to the web view and jump directly to the source on GitHub. And this also exists for Roslyn: there is sourceroslyn.io. And if I search for the ISourceGenerator interface, we see it. Here too I can jump to the web view, and we land in the Roslyn repository and see how the compiler team is actually achieving all of this.

Stefan (00:45:39)

Now I would like to show a bit about debugging. So in Visual Studio, we can mark a project as a Roslyn component, and we can also emit the generated files. This is basically what I showed; what I would expect is that maybe it works in this project. If we have a look at the… close that… at the example: you see this example now consumes the generator.

Stefan (00:46:20)

In this case, I also need to set — because I don’t reference it as a NuGet package but reference the project directly — I need to tell MSBuild that this is not a library, but actually a Roslyn component. So I say I don’t want to reference the output assembly, because there is no assembly I want to reference; I just want to enable this analyzer, because source generators are technically analyzers. And in this case, I also set the target framework. So this is how I can, from an example project, consume my generator directly. And if I go to the dependencies tab — let me open it and show it — yes, we see that here under the dependencies we have analyzers. This is the F0 generator that we have had a look at. Here is the EnumInfo generator, and the EnumInfo generator generates this EnumInfo.g.cs. So what you see here, this is the hint name that we give to the tooling so that it can be displayed here. And I can double-click and then inspect what is actually generated. But I can also put this out physically to disk. So if I jump to this directory…

Stefan (00:47:42)

We can go to the obj folder and we have the generated directory in here. Again, my generator — the EnumInfo generator — and in there is the same file. But now here’s the physical file, the physical representation. How this is enabled is via a pair of MSBuild properties: we can emit those compiler-generated files, and then we can tell MSBuild — the tooling — where to put them, via the compiler-generated-files output path. I say the base intermediate output path, which is the obj folder, plus a generated folder. And then we will find our generated files in here.
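The two MSBuild properties described here go in the consuming project file, along these lines:

```xml
<PropertyGroup>
  <!-- Write generator output to disk instead of keeping it only in memory. -->
  <EmitCompilerGeneratedFiles>true</EmitCompilerGeneratedFiles>

  <!-- Place the emitted files under obj/generated, as in the demo. -->
  <CompilerGeneratedFilesOutputPath>
    $(BaseIntermediateOutputPath)generated
  </CompilerGeneratedFilesOutputPath>
</PropertyGroup>
```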

Stefan (00:48:23)

In order to enable debugging in Visual Studio, we can also set a debugger profile. We set this to the Debug Roslyn Component profile and can then actually debug into source generators. Let’s see if this is working. So we see here: this is the generator project, our .NET Standard 2.0 source generator. Here are the launch settings, and in the launch settings I now say the target project — the project which should be executed in order to run this generator. I tell it to use this example project. And let’s actually select the generator as the startup project now and put a breakpoint in our EnumInfo generator.

Stefan (00:49:27)

Perhaps just before we emit it. And there we are, and we see the hint name — it’s this EnumInfo.g.cs. And the source text — let’s Shift+F9 that. So here we have it; this actually also has the text. Here is the string that we will output to the compilation. So this may help with debugging and stepping through code. There’s also another way, via unit testing — this is actually my preferred way. If we have unit tests, I can also debug. So actually, let’s keep that breakpoint. If I go to my EnumInfo generator test…

Stefan (00:50:22)

There it is. Oh, this is actually the generator test. This one. Yes. So if you’re familiar with testing in Roslyn, the usual approach is to define an input string, and then we can expect an output string. So let’s jump to, for example, this test. I have here the StringComparison, and I call this EnumInfo.GetName(). So this is basically the input text. And as the output text, I now expect what the source generator creates.

Stefan (00:51:04)

So indeed, for this StringComparison, the switch case that we want to generate. And with this tooling we can then verify that, and we can also just debug this test. To see that, we will jump in here as well. So debugging from a unit test works, in my opinion, better than via the launch-profile debugging experience, because this also works for Rider, for example. And it’s always good to have a unit test. Speaking of unit testing, how can we achieve it? There is a package for that: Microsoft.CodeAnalysis.CSharp.SourceGenerators.Testing.XUnit — that’s the xUnit flavor. This also exists for MSTest and NUnit, if we want to find out more about this.

Stefan (00:52:00)

And if we want to find out more about this — I also linked all of this in the presentation — basically, if we search for microsoft.codeanalysis.testing, this leads us here. So this is in the Roslyn SDK project on GitHub. And with that, we can test analyzers, code fixes, also code refactorings, and here our source generators. So with these packages, we can nicely unit test our source generators. And also, what I showed already from the consuming project: we need to set it up like this. If it’s packed in a NuGet package, the most important part to keep in mind is to always have PrivateAssets set to all.
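The two consumption styles mentioned — direct project reference versus NuGet package — translate to project-file entries like these (the Include paths and package name are placeholders):

```xml
<ItemGroup>
  <!-- Direct project reference to the generator: there is no assembly
       to reference, so enable it as an analyzer only. -->
  <ProjectReference Include="..\MyGenerator\MyGenerator.csproj"
                    OutputItemType="Analyzer"
                    ReferenceOutputAssembly="false" />

  <!-- Or, when consuming from NuGet: PrivateAssets keeps the generator
       out of the consumer's own dependency graph. -->
  <PackageReference Include="My.Generator.Package"
                    Version="1.0.0"
                    PrivateAssets="all" />
</ItemGroup>
```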

Stefan (00:53:06)

Speaking of NuGet packages, I’m actually doing this right away. What does a generator look like on NuGet? So, let’s have a look at nuget.org, jump to the generator that we showed, F0.Generators, and inspect it in the NuGet Package Explorer. Now we see what’s actually in this package. We see we do not have a lib folder. We could have both: for example, Microsoft.Extensions.Logging supplies both the library plus the generator. Here, this is a generator only. So what we have — since generators are technically analyzers — we see here the analyzers/dotnet/cs folder. Here is the generator. And since this is not a library, we have no dependencies or target framework dependencies. Since this is a generator, we need at least the .NET 5 SDK in order for the source generator to work.

Stefan (00:54:20)

Talking about nullable reference types: one thing about source generators — and this actually applies to all Roslyn projects — is that they need to be .NET Standard 2.0. But the default language version of .NET Standard 2.0 is C# 7.3, and for nullable reference types we require at least C# 8. So what I like to do is multi-target the analyzer: always also target the latest .NET target framework so that we get the latest nullable annotations, and enable nullable. But now we need to be careful, because an analyzer must be a netstandard2.0 project. So for the NuGet package — I will show this in a second — we need to select the correct target framework. In order to get the nullable attributes, we can use the Nullable package. And for the netstandard2.0 target framework, we should actually suppress nullable warnings, because the BCL is not annotated there yet, so it wouldn’t be very meaningful.

Stefan (00:55:31)

Now the Roslyn APIs, they actually are mostly annotated. Are they mostly? This is a lot of code, but at least a lot of it is already annotated. So these are fine. For the .NET 6 build, we would use that to get our nullable annotations right. But netstandard2.0, which we then basically treat as nullable-oblivious, we wouldn’t mind, because we already checked it for .NET 6 and we have warnings as errors. Then we are fine, good to go.

Now, I want to say a bit about versioning. So what I showed so far is what has been added in Roslyn 3.8, which is basically the .NET 5 SDK. It requires at least Visual Studio 16.8 or Rider 2020.3. Now, in Roslyn 3.9 — which is the .NET SDK 5.0.200, available since Visual Studio 16.9 — we have a new feature for source generators. Let me briefly show that. So here we now have a 3.9 generator. This is basically how we control what feature set, what language set, what compatibility level we have: via the version of the package that we consume. It’s still a .NET Standard project. And what we have now is something called post-initialization. So we can register for post-initialization and put out source code basically unconditionally. Now, this GeneratorPostInitializationContext is much slimmer than the other one: we have no compilation here, no parse options, no additional files. So we can basically do unconditional generation of code that we always want to generate. And the thing is that other generators may depend on something which has been emitted by post-initialization. Everything which is emitted in the Execute step — if you have multiple generators, they cannot depend on each other. But everything which has been post-initialized may be there.
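The post-initialization registration described here looks roughly like this (the namespace and type names follow the demo’s spirit but are assumed):

```csharp
using Microsoft.CodeAnalysis;

public void Initialize(GeneratorInitializationContext context)
{
    // Roslyn 3.9+: emit fixed source unconditionally, before Execute runs.
    // Unlike Execute output, post-initialization output is visible to the
    // input compilation of other generators.
    context.RegisterForPostInitialization(postInitContext =>
        postInitContext.AddSource("PostInit.g.cs", @"
namespace PostInitialization
{
    public static class Roslyn39 { }
}"));
}
```

Note that the callback’s `GeneratorPostInitializationContext` deliberately exposes no compilation, parse options, or additional files — the output cannot depend on user code.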

Stefan (00:57:55)

I actually have here the evidence. So I have here a syntax node receiver. Actually, where is it? There it is. So in Execute, we have a look at the compilation. We get this post-initialization type that we create here in the PostInitialization namespace. We create a type called Roslyn39, and we can actually assert that this is not null — so this really exists — and then basically say, “Okay, this type has really been found.” So let’s see if this actually is there. If we again have a look [inaudible 00:58:35] analyzers. Here is our… oh no, in the example, of course. Here’s our demo analyzer, and here is this post-initialization file. This one we put out unconditionally. And the other one now says, “Okay, the post-initialization type is found. It is available.”

Stefan (00:58:59)

Now, while I have this open, I also want to show a bit of a potential performance issue, because source generators may be invoked many times. So if I go to the demo — and I hope we can see this if I… yes, it works — I hit Ctrl+D to duplicate this line. And we see on the right side that more and more nodes are visited, the more code we have. This counter — if we have a look at the generator, we do have a syntax receiver here, and this syntax receiver just increments the nodes-visited count. It counts every time it receives any node. And then we just put this out in this comment, right here. And this is what we see there: the more code we have, the more often this method is invoked. And it is now invoked for basically every key press. So if I start typing a comment, this is invoked. And if I add more code or add a new file — the more I type, the more it is invoked.

Stefan (01:00:07)

And if we have a small project, this is no problem. If we have a big project, it’s most likely still not a problem. But if we have a huge project — for example, the .NET runtime repo — this may cause huge performance issues. And since this also runs inside Visual Studio, it may degrade the editing experience. We may experience lag between our key presses, because the generator kicks in all the time and with every key press emits everything anew. And this is not very fun to edit with.

Stefan (01:00:46)

And this performance issue has been addressed with the latest version of source generators, added with Roslyn 4.0, which requires at least the .NET 6 SDK — available, in turn, in Rider 2021.3 or in Visual Studio 17, which is Visual Studio 2022. Which means that if we upgrade our source generators from 3.8 to 4.0, they no longer work with the .NET 5 SDK; we require at least the .NET 6 SDK. But since .NET 5 is running out of support in May, I guess this is fine to do, depending on your use case. Now, how do these .NET 6-style generators work? I can show you another project; this is yet another generator that we’ll have a look at. I will briefly show the example first. Let me change something here. Let’s change this to… no, let’s show this first.

Stefan (01:01:57)

We have here a small program which uses a record class. However, it’s actually not the record feature itself we care about, but the init keyword that a record utilizes. If we have a look at SharpLab — I’ll put in this record here and have a look at the C# version of the code, and let’s move that — yes, we see that in the generated code, this property that we defined here uses the init keyword. It is compiler generated. But in order for this to work, we need the IsExternalInit type available, which is this one. If I F12 this, we see it’s available in the runtime. Actually, let’s have a look at since when this is available: it’s available since .NET 5. So I couldn’t use a record type in something lower than .NET 5. Let me change this to net472 — so I’m now switching to .NET Framework.
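The polyfill a generator (or a hand-written file) supplies on older target frameworks is tiny — the compiler only checks that the type exists:

```csharp
// Only needed on target frameworks older than .NET 5, where
// System.Runtime.CompilerServices.IsExternalInit is missing.
// Its presence is all the compiler requires for init-only setters.
namespace System.Runtime.CompilerServices
{
    internal static class IsExternalInit { }
}
```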

Stefan (01:03:15)

Now, .NET Framework doesn’t have this type available. And although I set the language version to default — I’m using the .NET 6 SDK here, which is C# 10 — and I build, this project still compiles, because I have a source generator supplying this type. If I F12 this now, we will see it is no longer the type from .NET, but a source-generated type. So the compiler basically just needs this type to be available in order for the init-only feature to work. And how does this generator work? This is now actually the newer version, a Roslyn 4.0-style generator, which is called an incremental generator. And the programming model with incremental generators is a little bit different than with regular source generators. Under the hood, an incremental generator, as the name suggests, tries to do stuff incrementally. So we build up a pipeline which then gets executed.

Stefan (01:04:28)

And during this execution of the pipeline, the driver of the generator figures out which parts have already been executed and whether they receive the same input as before — because if they receive the same input, we don’t need to produce again all the output that we already cached. So with the combination of this pipeline and caching, an incremental generator can improve the editing experience in Visual Studio, so that we don’t regenerate everything just because we hit a semicolon somewhere. To mention: if we do a dotnet build, or build on a CI system, there won’t be any noticeable difference between using ISourceGenerator and incremental generators. But for the editing experience, depending on the size of the project, the difference may be tremendous.

Stefan (01:05:33)

So with an IIncrementalGenerator, we only have one single method: the Initialize method. The syntax receiver is gone, because the syntax receiver was — or could be — the potential performance issue, since it gets a lot of nodes all the time. So this is gone now. This looks a little bit different now: the compilation and the parse options and the additional texts — we don’t have them available directly, but via this incremental value provider. And this is the core type that we use in order to build up our pipeline. So first, we have a syntax provider. We can create a syntax provider by feeding in two callbacks. First we have a predicate. This basically returns bool and says, “Are we interested in this node?” So this is now the comparable part, the equivalent to the syntax receiver.

Stefan (01:06:34)

Let’s have a look at this method, because this method now again gets in the syntax node — just as OnVisitSyntaxNode did in the previous syntax receiver — and returns a bool saying, “Is this node interesting for us? Could it be a potential candidate for the generator?” What we do in here, we see: is this a record declaration, and does this record declaration have at least one parameter? Because then an init-only property is emitted. The alternative would be an accessor declaration syntax — basically, if we define the init keyword ourselves. So here we have the explicit usage of init; down here we have the implicit usage of init. And if it matches, then this node is interesting for us, because then we actually do have to emit this type if it’s not available. The second callback to this syntax provider is the syntax transform, which now receives the node, and we can return pretty much anything.

Stefan (01:07:48)

We could transform it to whatever we need in order for our pipeline to be meaningful. In this case, I basically just return the node, but we could turn it, for example, into a string or a collection of something. And then we combine that step with the next step of the pipeline, because now I need to find out if this type that I potentially need to generate is already there. So basically: do we have a target framework which already defines this type, or is the IsExternalInit type already defined manually? And for that, we need the compilation. But we don’t have the compilation directly; we only have, in the context, a compilation provider, which is an incremental value provider. And this is where the pipeline kicks in. So we combine the compilation provider with our previous step, and then we get a combination: this tuple here, which now holds the collection of our syntax nodes and the compilation.

Stefan (01:08:51)

I also want to see what language version this is. Is this actually a language version which supports the init-only keyword? Because otherwise it doesn’t make sense to emit this type. And for this, I need the parse options. So again, I now get the parse options provider and combine that with the previous step, with this tuple up here. And once I have combined, I could keep combining with more: for example, if my generator needs additional files, like the additional texts, or if it needs the options, I can keep combining them until I have this one big provider that I then register my output with. And in this output we see a tuple with multiple lefts and rights. If I jump into this callback, we actually see this tuple being fed in, and from here we can now do our generation logic. So in our case, we find out: is the IsExternalInit type already defined in the compilation? What’s the C# language version? And so on and so forth.
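Assembled end to end, the pipeline described here could be sketched roughly like this (hedged; the candidate check is simplified and the generation logic is elided):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp.Syntax;

[Generator]
public sealed class IsExternalInitGenerator : IIncrementalGenerator
{
    public void Initialize(IncrementalGeneratorInitializationContext context)
    {
        // Step 1: candidate syntax nodes via predicate + transform.
        IncrementalValuesProvider<SyntaxNode> candidates =
            context.SyntaxProvider.CreateSyntaxProvider(
                static (node, _) => node is RecordDeclarationSyntax, // simplified check
                static (ctx, _) => ctx.Node);

        // Step 2: combine with the compilation and the parse options.
        var pipeline = candidates.Collect()
            .Combine(context.CompilationProvider)
            .Combine(context.ParseOptionsProvider);

        // Step 3: register the output against the one big combined provider.
        context.RegisterSourceOutput(pipeline, static (spc, tuple) =>
        {
            var ((nodes, compilation), parseOptions) = tuple;
            // Here: check whether IsExternalInit is already defined in the
            // compilation, check the language version, and emit the type
            // only if needed (generation logic elided).
        });
    }
}
```

Each `Combine` produces nested tuples, which is why the callback deconstructs "multiple lefts and rights".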

Stefan (01:09:59)

And then eventually, again from a generation point of view, it’s basically the same: we need some string that we can put out into our compilation, and we end up with this IsExternalInit type emitted if we need it. And this is the different programming model of incremental generators. It’s a little bit less straightforward, but it eliminates potential performance issues. One more consideration about performance: our source generators are not async, they are all synchronous, and Visual Studio and Rider are waiting for this compilation so that we can continue editing. So if we’re doing something more expensive, or some longer loop, we should regularly check if cancellation has been requested already, because it doesn’t make any sense to keep generating something which will be discarded anyway or is already outdated. And we discussed the potential issue of the syntax receiver in huge projects.
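The cancellation advice boils down to checking the token that every generator callback receives; for example (an illustrative, self-contained sketch):

```csharp
using System.Collections.Generic;
using System.Text;
using System.Threading;

internal static class SourceBuilder
{
    // Check the token inside any longer loop so the IDE can cancel
    // generation whose result is already outdated.
    public static string Build(IReadOnlyList<string> members, CancellationToken cancellationToken)
    {
        var sb = new StringBuilder();
        foreach (var member in members)
        {
            cancellationToken.ThrowIfCancellationRequested();
            sb.Append(member).Append(';').Append('\n'); // expensive per-member work
        }
        return sb.ToString();
    }
}
```

Throwing `OperationCanceledException` here is fine: the host catches it and simply discards the obsolete generation pass.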

Stefan (01:11:20)

Talking about NuGet: if you want to pack this analyzer and ship it on NuGet, as I briefly mentioned already, what we want to do is to always set DevelopmentDependency to true, because then, when we dotnet add this package, we get PrivateAssets=all per default, which you always want, because analyzers per default shouldn’t flow to transitive dependencies. And since we did multi-targeting, our analyzer targets both .NET 6 and .NET Standard 2.0, so we need to select the correct target framework, which is .NET Standard 2.0; the analyzer must be .NET Standard 2.0. This is what we do with this line: we select the .NET Standard 2.0 target and pack it into this directory. This is where the analyzer lives in the NuGet package; we saw that already.

Stefan (01:12:16)

And for the end… actually, almost the end, I would like to highlight a couple of examples. I’ll link them all in the presentation. For example, there is a C# source generators repository where the community has collected a lot of generators. So maybe there is one already in there which solves an issue of yours. Recently also, the .NET Community Toolkit has been released, which has a source generator for… if I find it. ComponentModel, yes. For the INotifyPropertyChanged interface. INotifyPropertyChanged is usually something that is tedious to implement; it always follows a very similar pattern, so we can have a source generator generating this for us. The user of this generator has less code to manage and maintain.
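As a usage sketch (hedged, based on the CommunityToolkit.Mvvm package; treat the exact attribute names as an assumption), the generator lets you write an annotated field and get the notifying property generated for you:

```csharp
using CommunityToolkit.Mvvm.ComponentModel;

// The generator emits a public Name property whose setter raises
// PropertyChanged, so the INotifyPropertyChanged boilerplate disappears.
public partial class PersonViewModel : ObservableObject
{
    [ObservableProperty]
    private string name;
}
```

The class must be `partial` so that the generated half can be merged in, which is exactly the additive-only model discussed above.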

Stefan (01:13:26)

I mentioned testing. With this testing package, we can very nicely debug and test our generators. We can also do code coverage on our generators, and also more advanced scenarios such as mutation testing. I’m using Stryker.NET here for mutation testing, and I want to show the report. So let’s have a look at the generator that we inspected, the EnumInfo generator. We basically run the command-line tool dotnet stryker against it, and since we have our xUnit or NUnit or MSTest suite, this works just perfectly. If you’re not familiar with mutation testing, there are quite a lot of videos on the web. But if you are: we see we have killed a lot of mutants, and a couple of them have survived. I seem to have not excluded my debug asserts here; let’s see what else has survived. Let’s see what mutation testing does: mutation testing changes this greater-than-or-equal-to to just a greater-than, and then no test fails.

Stefan (01:14:51)

I definitely need to add a couple of tests to this very project. You can also do benchmarking, both on generators and on the generated code. With the generator itself, it’s a little bit more tricky, because the driver that we use does some caching internally, so we can’t get, for example with BenchmarkDotNet, the most precise reports. But we could get an indicator. Let’s have a look at an example benchmark where we again feed in some source to this driver, and then basically run our benchmark against where we invoke it. I’ll just briefly step over it: somewhere there is a generator driver that we use in order to run the generator and update the compilation. And we can also benchmark the generated code.

Stefan (01:15:58)

I’m afraid I might have deleted the report, but basically I have here a benchmark where I compare the generated version of this EnumInfo.GetName with the regular Enum.ToString and the already existing Enum.GetName. And here you would see that the generated version, if I step into it (this is our generated version), is measurably faster and also allocates less. Actually, it allocates nothing, so it is allocation-free, while Enum.ToString and Enum.GetName actually allocate a little bit. And if you would like to know more about source generators, in this presentation I link a couple of videos. For example, a recent video here is about how incremental source generators work under the hood, how this pipeline is internally managed, and how the data is internally managed and updated or reused.

Stefan (01:17:10)

And before we hand over to the Q&A section, Gael is about to show something very exciting. I just want to add one more note: if you have some questions later on, after the Q&A, you may contact me on Twitter. I am also now streaming on Twitch, as Flashover. And actually, in two weeks on Saturday, I will be doing a live stream on source generators, where we will build a source generator live. All of the source is available here; I will put it into the webinar chat if you want to check it out later.

Metalama demo

Gael (01:17:53)

Thank you, Stefan. It was amazing, very detailed. I learned a lot, even though I’ve been working with source generators.

Gael (01:18:00)

I’ve been working with source generators for a couple of months and didn’t know everything. I would like to show a product that, actually, we’ve just released a preview of this week. Let me share my screen. I will do a very short introduction to this.

Gael (01:19:05)

So, what you have shown, Stefan, this is actually, from my point of view, very, very low-level development, because you need to cope with the syntax nodes, with the semantic model. And I think that what Microsoft did with that is a low-level API to extend Roslyn. And there are many of these APIs. You mentioned analyzers; there are also suppressors, there are code fixes and code refactorings. And we have built a product evolved from PostSharp, or the same kind of project as PostSharp, for high-level metaprogramming. So you can do all these things, but without going down to the syntax nodes.

Gael (01:20:07)

And the concept of Metalama is that we have a templating language that we call T#. T#, like templating for C#. And this is an example of a template that does logging. So, this is the template. What is great here: this is compile time. So this is a compile-time expression. So, we are entering the method. Proceed here is the call to the method that we are templating. And then, after that, we are writing another message. So, I can apply the template to any method, and I can preview what the template does to my code. So, you see here, we are doing code generation, but there are big differences with the Roslyn code generators, because we are able to modify a method itself, not just a partial [inaudible 01:21:18], so it is not additive-only; it can also change existing code.
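For orientation, a minimal logging aspect of this kind in Metalama’s preview API looks roughly like the following (a hedged sketch; the message text is illustrative):

```csharp
using System;
using Metalama.Framework.Aspects;

public class LogAttribute : OverrideMethodAspect
{
    public override dynamic? OverrideMethod()
    {
        // Compile-time: meta.Target.Method describes the method being templated.
        Console.WriteLine($"Entering {meta.Target.Method}.");
        try
        {
            // meta.Proceed() expands into a call to the original method body.
            return meta.Proceed();
        }
        finally
        {
            Console.WriteLine($"Leaving {meta.Target.Method}.");
        }
    }
}
```

Applying `[Log]` to a method then weaves the two `Console.WriteLine` calls around the original body at compile time.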

Gael (01:21:22)

So, this is one example. Another simple example of a template you may write is, for instance: we are invoking the method inside a loop, we also have a catch, and when we can still retry, we are just sleeping and looping. So, if we apply this retry to a method, and I’m doing the preview, we can see what Metalama will do with the code. The difference is that it does this with the code at compile time; at run time it doesn’t edit your code, there would be no point.

Gael (01:22:17)

And, you can also do more complex templates. So, here I would like to do logging with parameters. Instead of just saying, “I’m starting the method,” I would like to have the method name with the parameters, and I could add another parameter and see what it is doing.

Gael (01:22:43)

So, I can do the diff, and this aspect, or this template… we can call that an aspect… is printing the name of each parameter. So, it’s actually just a more complete, more complex template, where we are creating an interpolated string builder, we are going through the parameters, we are adding text tokens. And, you see, this is C#. It is not Roslyn programming. It is C#; it’s the same level of abstraction of programming as you are using when you are writing UIs or business applications. You don’t need to go into the details of Roslyn. And we can also do what source generators can do, like introducing new things. Here, with this template that we can apply to a type, we are introducing a property into that type, and I can do the preview to see what’s happening. So, the good thing is that we can combine.

Gael (01:23:56)

So, you have shown the implementation of notify property change through source generators. I frankly believe this is a shame, not of you, Stefan, but of Microsoft, to show notify property change as an example of source generators. Because what they do is that they’re transforming fields into properties, so generating properties from fields. So, actually, you are losing the idiomatic C#. It’s no longer C#; what you are doing is just hacking. We know how to do notify property change; we’ve done that for 10 years. You don’t want to change your properties into fields just to make your source generator work.

Gael (01:24:48)

So, here we have a more complex aspect that actually combines two things: we are introducing a method, we are introducing a property, we are implementing the interface and, at the same time, we are changing all property setters to detect a change in the value. This is the right way to do that, because your existing code doesn’t need to be changed. Source generators can be abused, and they have been abused. You should not do that. That’s a matter of opinion that we could discuss.

Gael (01:25:20)

So now, if you apply notify property change to existing code, what it’s actually going to do is to change your code, to inject the logic; it applies the templates. And you can notice that we can actually call, from the source code, the code that has been introduced. Something that was not possible with PostSharp, because we did that at post-compile time. We are able to do that because we integrate with source generators. So, here you can see that we have our source generator, which defines the new methods. You will see these new methods are not implemented. I believe there is no point at all in generating, at design time, the bodies of methods. It just takes CPU time; it is useless. It’s only useful when you compile. So, we don’t do that; we only generate the declarations, not the bodies.

Gael (01:26:31)

Now, if you choose here the LamaDebug configuration, you can also debug the source that we have produced. So, you can debug the generated code, the transformed code. Now you get into the property setter. So, we have that.

Gael (01:26:52)

And that’s not all; there is a lot more. You can also emit warnings, suppress warnings. You can do code fixes and code refactorings using templates; I didn’t show that. And I have shown the code transformers. That’s all I wanted to show today, because this is not a webinar about Metalama. But if you are interested, there are two things you can do. If you want to try it today, you can go to our website, postsharp.net, into the Metalama section. There is also an online sandbox, actually a fork of Try.NET, where you can try Metalama without leaving your browser. The next webinar, on March 15th, will be about Metalama. The exact time is not decided yet, but we will perhaps do it two times in the day: one in the European afternoon, the second in the European evening. That’s all I have for Metalama.

Questions

Gael (01:28:22)

And let’s see if there are questions. To see questions, I may need to stop sharing. And I don’t see questions. But if there is no question, I have one question, actually. You have shown mutation testing. Could you elaborate? It’s the first time I heard about mutation testing, and I would like to hear: what is it I’m missing?

Stefan (01:28:58)

Perhaps, let me briefly dig out the example that I have. There was, actually, a mutation testing talk that I gave at NDC Oslo, so let me fire up that solution. I like to show it by example.

Gael (01:29:23)

You need to share a screen again.

Stefan (01:29:24)

Yes, yes, yes. I’m on it. Oops, I don’t want to cause any recursion. So, in this example, let’s actually run all the tests and see if everything indeed turns green.

Stefan (01:29:47)

And I have a bit of a… oops. Let me briefly get rid of that by terminating… global.json; I believe I’m pinning to an older SDK version, which I no longer have. And, for example, I have here a naive little calculator where I can add numbers, subtract numbers, multiply, and so on and so forth. And I actually do have a little bug in here: I do have the Divide method, but here, by accident, also a multiplication, so I know I have a bug here. But what troubles me now is that all my tests are green. And actually, if I ran a code coverage report on that, I would see that I have 100% code coverage; I got everything covered. Let’s actually jump to the test that we have here. I call Divide: I divide one by one and expect one. And I divide 240 by zero, and then expect this NaN that I defined, [inaudible 01:31:03] exception.

Stefan (01:31:04)

So, I have 100% code coverage, but I did not discover the bug here, because my test suite isn’t good enough, and this is what mutation testing uncovers. What mutation testing does: it has a look at the tests and sees what the project under test is. And then it actually mutates the source code of this project under test. So, it goes ahead and changes, for example, this addition to a negation. So, it created one mutant. Now, with this modified source, all the tests are run again, or at least all impacted tests, which are the tests that actually cover this production line.

Stefan (01:31:55)

And now we expect at least one test to fail, and it does. So, we see here at least one test is failing, and this is good. We have produced the mutant, and now one test is failing; that’s good. So, the mutant has been killed. So, we not only cover this line of code, we also have an assertion that protects this line of code, or this very statement. And then mutation testing continues and changes again, for example, a minus to a plus, and so on and so forth. And eventually, down here, it would change this multiplication, for example, to something else… sorry, actually, it changes it, in that case, to a division. Oops, I hit [inaudible 01:32:53]. And we see now we have no failing test, although we changed this; basically, we fixed it. But we have no failing test, which means that this line… well, it is covered, but it’s not 100% asserted. Changing the code won’t make a test fail. So, I could now add a better test: let’s say I divide four by two and expect two. I restore the original code, and now I have improved my test suite. Now my test suite fails, and now I have really discovered that bug.
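The whole demo fits in a few lines (a self-contained sketch, not the exact demo code):

```csharp
using System.Diagnostics;

public static class Calculator
{
    // The demo's bug was "a * b" here; note that Divide(1, 1) == 1
    // cannot tell multiplication from division.
    public static double Divide(double a, double b) => a / b;
}

public static class CalculatorTests
{
    public static void Run()
    {
        // Weak test: passes for both a / b and the a * b bug.
        Debug.Assert(Calculator.Divide(1, 1) == 1);

        // Stronger test: fails for a * b and kills the operator-swap
        // mutants that Stryker.NET produces for this line.
        Debug.Assert(Calculator.Divide(4, 2) == 2);
    }
}
```

With only the weak test, the line shows 100% coverage while the operator-swap mutant survives; the stronger test is what kills it.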

Stefan (01:33:40)

So, mutation testing, in short, creates mutants of the production code, and for each mutant we expect at least one test to fail. If a test is failing, the mutant is killed; that’s excellent. If no test fails, the mutant survives. And then, at the end, in the mutation coverage report… do we still have it open from previously? No, I’m afraid I don’t. But in this mutation test report, we can see: okay, what has been mutated? Where is our test suite not 100% effective yet, so we can improve our tests and eventually fix more bugs?

Gael (01:34:19)

Okay. We have a question from the audience, “Do you know if the .NET Standard 2 restriction for source generators will be changed in the future, so we can use later target frameworks like .NET Standard 2.1 or higher?”

Stefan (01:34:37)

I’m afraid I don’t know that; I am not sure. I believe, but this is my understanding of it, that as long as Visual Studio is working as it is, and is on .NET Framework, the .NET Standard restriction will stay, for backwards compatibility. If there are any plans to change it in the future, maybe there is a different solution around it, but I’m not aware of that, I’m afraid.

Gael (01:35:11)

Yes. I agree. I-

Stefan (01:35:12)

This not only impacts source generators; this impacts Roslyn plugins in general, like code fixes and analyzers.

Gael (01:35:22)

Yeah. We had the same discussion internally, and it needs to stay .NET Standard 2.0. The way we solve that in Metalama is that only the compile-time code needs to be .NET Standard 2.0, because actually, when we compile your project, we split it between run time and compile time. But anything that is compile-time must be .NET Standard 2.0. I was disappointed to see that Visual Studio 2022 was on the .NET Framework runtime and not on the .NET 6 runtime. That would have solved a lot of problems, but I think it is not so easy.

Gael (01:36:07)

We have another question. “If we don’t emit the generator to disk, which is recommended, I believe, does this affect the debugging experience, if I want to step into the generator code from this project?”

Stefan (01:36:23)

If I understand correctly, it’s actually the other way round. For production projects, it’s not recommended to really emit the files to disk; those are just on the [inaudible 01:36:36] for debugging, because it takes additional resources to put the files to disk. Per default, they are not emitted to disk; we need to explicitly enable this for the debugging experience, so that we can view them. What I like to do is then have the generated file open on a second screen while I edit the source generator. So, per default, it’s off; for debugging, we need to enable it. But I don’t recommend enabling it for the production project itself; I like to enable it for a small testing project. I always have a sample project where I actually integration-test the generator.

Gael (01:37:27)

So, by default, the files are in the PDB. So, the generated files are in the PDB. I’m not sure if it’s in all PDBs or just in the new PDB format. Do you know anything about that?

Stefan (01:37:40)

Oh, I’m afraid, I don’t. No.

Gael (01:37:44)

And so, pay attention. Well, you should not ship your PDB, if your source code is confidential. But, with source generators, your PDB really contains source code, but only the generated one.

Stefan (01:38:01)

Which is also now an interesting thing: to be careful about from which source we consume source generators. Well, this applies to every NuGet package; we could depend on a malicious package which has malicious code, which maybe reads my local files and puts them on a server. But with a source generator, an attacker could have, basically, access to our source code, to the source tree. So, from a security point of view, this is something to consider: safe sources for our generators, because they could technically be abused for an attack, or for malicious purposes.

Gael (01:38:52)

You mean, because the generator or the analyzer runs inside the Visual Studio, [crosstalk 01:39:04]

Stefan (01:39:04)

Not specific in Visual Studio, but in general.

Gael (01:39:07)

Yes.

Stefan (01:39:08)

Since the generator sees the source. Now, the tooling could put it out to file for us to debug, but an attacker, a malicious package, could put this to some server and could leak our source code, yeah.

Gael (01:39:24)

But this was there from the beginning of NuGet. You could have an MSBuild task that was a DLL, and then you got access to the local machine.

Stefan (01:39:36)

Oh, okay.

Gael (01:39:37)

Yeah.

Stefan (01:39:37)

I wasn’t aware of that.

Gael (01:39:41)

In the beginning of NuGet, it was a problem. So, actually, companies didn’t allow… well, not all companies, of course, but corporates didn’t allow consuming packages from nuget.org; they had their own curated repositories. But now, there are decent security settings on NuGet and you can define trusted publishers. For instance, we are signing all our packages, so if you trust us, you can add our signature to your list of trusted publishers. But yes, the risk was always there. If you consume a NuGet package, you just open the doors of your computer to the author.

Gael (01:40:33)

Good. We have run way over time, but it was very interesting. And I guess people who were not interested left already, so we could continue. Anyway, I don’t see more questions from the audience. So, Stefan, thank you very much for this evening, for this presentation. It was very interesting. Next month there will be a more detailed presentation about Metalama, based on source generators and analyzers, and all these technologies; no longer on MS [inaudible 01:41:12]. And if you want more details faster, you can go to our website today. Thank you very much. I’m going to stop the recording and stop the webinar in a minute. Thank you, and bye-bye.

Stefan (01:41:27)

Thank you very much. Thank you for having me.

Gael (01:41:28)

Bye.

Stefan (01:41:29)

I had a lot of fun, and I’m looking forward to the Metalama presentation.

Gael (01:41:33)

Thank you. Bye-bye.

Stefan (01:41:34)

Bye-bye.

Webinar Invite: Metalama, the new Roslyn-based meta-programming framework from PostSharp


Join us on Tuesday, March 15th at 15:00 UTC or 20:00 UTC when Gael Fraiteur, founder and president of PostSharp Technologies, will introduce Metalama, a new meta-programming framework built on Roslyn for modern .NET.

Metalama benefits from the 15 years of experience the team gathered while developing and supporting the IL-based PostSharp, but it is a completely new implementation built on the .NET of 2022. Metalama will cover all use cases of PostSharp.IL without dragging the legacy along.

Additionally, Metalama will have a comprehensive design-time experience and broad platform support.

During this webinar, you will learn:

  • How to eliminate boilerplate with code templates named aspects, so that you’re free for more meaningful development tasks.
  • How to empower your team with custom code fixes and refactorings, so that you improve everyone’s productivity.
  • How to validate your codebase against your own rules so that you can provide immediate feedback to your team members, make code reviews smoother, and improve the team’s alignment.
  • What is the current development status and roadmap of Metalama.
  • If and when you should migrate your code from PostSharp.IL.

Reserve your spot today to receive a calendar invite and, after the event, a link to the recording and the transcript.

SIGN UP FOR 15:00 UTC | SIGN UP FOR 20:00 UTC

Action required: Update PostSharp before updating Visual Studio 2022 to v17.2


We want to notify you that PostSharp Tools for Visual Studio break the debugging experience after updating Visual Studio 2022 to version 17.2. When the debugger starts, Visual Studio becomes unresponsive and has to be killed. The solution is to update PostSharp Tools for Visual Studio to version 6.10.9 or newer.

We apologize for the inconvenience.

Who is affected?

This problem affects users of PostSharp Tools for Visual Studio versions older than 6.10.9 updating Visual Studio 2022 to version 17.2.

What will happen?

Immediately after starting a debugging session – either by starting a project with debugging or by attaching the debugger to a running process – Visual Studio becomes unresponsive and has to be killed.

What can you do?

To resolve this you need to update PostSharp Tools for Visual Studio to version 6.10.9 or newer. You can download the tool at https://www.postsharp.net/download.


Metalama Status Update (April 2022)


It has been two months since our announcement of Metalama, and we wanted to give you an update, since we have been publishing new builds at a sustained pace in the meantime.

First, we are grateful for the attention Metalama got from the community. A few folks started to try the new framework and reported some very relevant feedback. We have solved all reported bugs – a couple dozen in total. It’s very interesting for us to see how you guys are trying to use the framework and what obstacles or difficulties you encounter.

Our special thanks go to Dom Sinclair for his review and edits of the documentation from a native speaker’s perspective. We have changed all our advices to the correctly spelled advice as a result :-).

New features

While addressing community feedback, we have also been busy building new features:

  • Support for Visual Studio 17.1 and Roslyn 4.1.0. We can now support many versions of Roslyn in the same set of packages.
  • Exclusion of aspects. To prevent a declaration from being targeted by a fabric, use the ExcludeAspectAttribute custom attribute.
  • Initializers. You can add initializers to introduced fields, properties and events. You can also inject initialization logic into object and type constructors. See Adding Initializers for details.
  • Require aspect. The RequireAspect method allows a parent aspect to add a child aspect, but only if the aspect has not been added by a different path.
  • Properties and indexers split. The IProperty interface no longer represents indexers, and the Properties collection no longer exposes them. We now have IIndexer and Indexers.
  • Incremental source generators. We have migrated our implementation of source generators to the new incremental API.
  • Code fix: change member accessibility. You can now change the visibility of a member from a custom code fix with the CodeFixFactory.ChangeAccessibility method.
  • Documentation. We have completed chapters about fabrics, validation and custom code fixes.

What’s next

Metalama 1.0 is now almost feature-complete and you should no longer see large API additions.

In the next weeks, we will be focusing on the following:

  • Testing and bug fixing
  • Documentation:
  • Adding a proper API to implement parameter/field/property validation
  • Telemetry
  • Licensing

At the current pace, we expect to be code complete in June, which means that we can hope for a general release in September, after the summer break.

In the meantime, your feedback is greatly appreciated and, most likely, can have a large impact on the final product.

Happy meta-programming with Metalama!

-gael

Metalama Status Update (May 2022)


It has been another month since our last update so I wanted to give you a fresh status briefing.

New features

  • Completely automated multi-repo deployment. Our build and deployment process is now completely integrated. We can now ship, in just a few clicks, all kinds of artifacts coming from 9 different git repos (and counting). We have created a custom build integration front-end, free and open source on GitHub.

  • Contracts allow you to validate or normalize the value assigned to fields, properties, or parameters. Check the documentation for details. There is a great example that validates all non-nullable parameters of public methods in the project, in just a few lines of code.

  • Template Parameters and Generic Templates. Templates can now have compile-time parameters and type parameters (i.e. generic parameters). Generic templates are especially convenient when your template code needs to use a generic method or type whose generic argument depends on the type of the declaration to which the aspect is applied. For details, see the documentation of this feature.

  • Telemetry. Anonymous error and usage reports now get automatically uploaded. You can of course opt out.

  • Extensibility examples. We fixed several bugs around the extensibility of Metalama using the Roslyn API. We are not completely finished with this use case, but you can already look at the following examples:

  • Documentation. We have completed the following articles:
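The contracts feature from the list above can be sketched as follows (hedged; based on my reading of the Metalama preview API, so the exact base-class and method names are an assumption):

```csharp
using System;
using Metalama.Framework.Aspects;

// Applying [NotEmpty] to a field, property, or parameter injects this
// validation at every assignment or call.
public class NotEmptyAttribute : ContractAspect
{
    public override void Validate(dynamic? value)
    {
        if (string.IsNullOrEmpty((string?)value))
        {
            throw new ArgumentException("The value must not be null or empty.");
        }
    }
}
```

The validation body is a template, so the same aspect can be applied in bulk, e.g. to all non-nullable parameters of public methods, via a fabric.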

New feature gap

We have decided to add a new feature to Metalama 1.0: the ability to pull a dependency from the constructor. It is important for implementing aspects that need to consume a dependency from a container.

What’s next

We’re now really close to a feature-complete release. The only feature gap is the one we have recently discovered: pulling dependencies.

In the next weeks, we will be focusing on the following:

  • Testing and bug fixing
  • Documentation:
    • Migration from PostSharp
  • Licensing

We still expect to be code complete in June and to spend the summer stabilizing everything.

In the meantime, your feedback is greatly appreciated and, most likely, can have a large impact on the final product.

Happy meta-programming with Metalama!

-gael

Metalama Status Update (June 2022)


It’s time for another status update. The big announcement of this month is that Metalama 1.0 is now feature-complete after we have added support for dependency injection in aspects.

Dependency Injection

In June, we have focused on the support for dependency injection. It is now possible for an aspect to use a dependency without knowing which dependency injection framework is used in the project using the aspect. The implementation of this feature is open source. It consists of a highly extensible abstraction, as well as two initial implementations: one for the standard injection patterns of .NET Core (i.e. Microsoft.Extensions.DependencyInjection), the second for a classic service-locator pattern (a good fit for objects that are not instantiated by the container).

Here is an example where a LogAttribute aspect pulls a dependency of type IMessageWriter. As you can see, the aspect code is very simple and does not know anything about the dependency injection pattern being used.
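The original example is not reproduced here, but a minimal sketch of such an aspect, assuming the open-source Metalama.Extensions.DependencyInjection package, could look like this:

```csharp
using Metalama.Extensions.DependencyInjection;
using Metalama.Framework.Aspects;

// Sketch of a logging aspect that consumes a dependency. Metalama pulls
// the dependency from whichever DI framework the consuming project uses;
// the aspect itself is framework-agnostic.
public class LogAttribute : OverrideMethodAspect
{
    [IntroduceDependency]
    private readonly IMessageWriter _messageWriter;

    public override dynamic? OverrideMethod()
    {
        _messageWriter.Write( $"Executing {meta.Target.Method}." );

        try
        {
            return meta.Proceed();
        }
        finally
        {
            _messageWriter.Write( $"Exiting {meta.Target.Method}." );
        }
    }
}

// The dependency contract consumed by the aspect.
public interface IMessageWriter
{
    void Write( string message );
}
```

This is a sketch under the assumption that [IntroduceDependency] is the pulling mechanism; see the documentation article linked below for the authoritative version.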

For details regarding dependency injection in Metalama, see this documentation article.

The dependency injection feature was made possible thanks to the following user stories:

Other features

  • The test framework now supports concurrent processing on different cores.
  • You can now override and introduce a finalizer.
  • We have updated our compiler to Roslyn 4.2.
  • 38 bug fixes and minor enhancements just in June.

What’s next

All features planned for 1.0 are now implemented; however, some of them still have gaps, so we are not code-complete as we had hoped last month.

Here is what is still on our to-do list:

  • Gaps in existing features:
    • advising operators,
    • proper testing with structs and records,
    • implementing generic interfaces,
    • improving code generation patterns,
    • getting System.Reflection object for declarations introduced by aspects.
  • Documenting and easing the migration from PostSharp
  • Licensing

Therefore, we will spend most of the summer filling these gaps and raising our standard of testing.

As always, your feedback is greatly appreciated and, most likely, can have a large impact on the final product. To get instant answers, the best way is still to join our Slack community.

Happy meta-programming with Metalama!

-gael

Metalama Status Update (July 2022)


It has been another month since our last update. July and August are traditionally lazy months for us, as team members enjoy several weeks of vacation. But despite the fewer working hours, our overall feeling is that our pace has significantly accelerated. As we announced last month, Metalama is now feature-complete, so we can focus on fixing bugs and closing the remaining gaps in the C# syntax supported by Metalama. All the remaining gaps and defects are now measured in hours or days, no longer in weeks.

Here is what we’ve managed to complete in July on the Metalama project:

  • 40 bug fixes
  • Proper support for structs and records
  • Overriding default interface member implementations
  • Overriding partial methods
  • Aspects with many layers
  • Introduce and override operators
  • Add contracts to constructor parameters
  • Improvement of supportability (better error reporting, optional automatic creation of process dumps upon exception)

A few customers have started to try Metalama on large solutions. Honestly, this is still risky business: they all encountered several blocking bugs. The good news is that we were able to diagnose and fix all of them with a cycle time of 2 or 3 days: yes, two days between a bug report and the deployment of a fix. This is a sign that our engineering processes (continuous integration) and our code base are both in good health, so we can hope to converge to a stable version in Autumn 2022.

If you want to try Metalama now, please note that August will also be vacation-heavy on our side, and we will not always be able to maintain such a short cycle time. So, if you are not willing to wait a couple of weeks for a bug fix, it may be better to wait until September.

What’s next

Here is our updated to-do list:

  • Gaps in existing features:
    • implementing generic interfaces,
    • getting System.Reflection objects for declarations introduced by aspects,
    • design-time cross-project cache invalidation.
  • Documenting and easing the migration from PostSharp
  • Licensing

As always, your feedback is greatly appreciated and, most likely, can have a large impact on the final product. To get instant answers, the best way is still to join our Slack community.

Have a nice summer,

-gael

Metalama Status Update (September 2022)


There was no status report in August because of vacation, so today I will cover the last two months. In short, we have been making Metalama faster, more reliable, and more robust without extending its feature set.

What did we achieve?

Late summer was characterized by three large refactorings, each requiring several weeks:

  • Cross-project aspects at design-time: caching, cache dependencies and cache invalidation. In previous builds, cross-project dependencies were not implemented, so changes in one project were not reflected in dependent projects.
  • Refactoring of the aspect linker, an important and complex internal component that links aspects and source code together. The component had accumulated hacks and technical debt, and we could no longer fix the sophisticated bugs being reported, so several weeks of work were necessary to clean it up.
  • Parallel compilation: Metalama now uses all cores on your machine by default.

Additionally, we have implemented the following user stories:

  • Support for Roslyn 4.3 (not yet for .NET Framework 4.8.1 on ARM64 – please ask us on support if you need it).
  • Implementation of licensing (to be announced).
  • Automatic clean up of temporary and cache files.
  • Introduction of custom attributes by aspects.
  • Filling gaps in the generation of System.Reflection objects from compile-time code for run-time usage (methods like ToType(), ToFieldInfo(), ToMethodInfo(), …).

We also fixed more than 50 bugs in the last two months.

We started brute-force testing of Metalama on the NopCommerce open-source application. We’ve already discovered and fixed several issues (principally a big performance issue addressed by parallel compilation), but we cannot yet announce that we can transform NopCommerce without error.

As you can see from the above, we have been principally working on robustness, sometimes on small new user stories, but no longer on new features.

What is the status?

Because of the work on these three major PRs, our ability to fix bugs quickly was hindered for several weeks. At the same time, more users started to use Metalama for personal projects and on experimental branches of their work projects, and they reported dozens of bugs in recent weeks. As a result of these two factors, and of our brute-force testing on NopCommerce, our bug backlog has grown. Because of that, for the moment and for the next couple of weeks, it is not yet a good idea to use Metalama for anything other than personal or experimental projects.

On the positive side, we no longer expect any complex or large changes. We have implemented all the large user stories, and the bugs that have been reported seem to require only a little work each. Therefore, we are confident that the quality of the codebase will now only increase.

We hope to have an RC by the end of Autumn 2022.

What’s next

As I mentioned, all user stories except a few minor ones (introduction of generic interfaces, overriding of indexers) have now been implemented.

We will mostly focus on our bug backlog during the next weeks, so that we are again able to support users who want to give Metalama a try.

As always, your feedback is greatly appreciated and, most likely, can have a large impact on the final product. To get instant answers, the best way is still to join our Slack community.

Happy llama!

-gael
