Channel: PostSharp Blog

PostSharp internals: Handling C# 8.0 nullable reference types


C# 8.0 introduces nullable reference types. These are yet another way to specify whether a given parameter, variable or return value can be null. But, unlike attribute libraries such as JetBrains.Annotations, Microsoft Code Contracts or PostSharp Contracts, nullable reference types have built-in compiler support, so we can hope for much wider adoption.

In PostSharp 6.4, we added full support for C# 8.0 and that includes nullable reference types. In this article, I discuss what we needed to do to implement this feature.

PostSharp as a code weaver

PostSharp is primarily a code weaver: it modifies the assembly produced by the C# compiler. As such, it works on your own code, and you may already be using nullable reference types. Did PostSharp need to do anything to keep working well in such scenarios?

It turns out that no, not really. Everything just works. Of course, C# no longer has an up-to-date specification, so it's hard to say for certain. But based on our tests and the information we collected from the web (especially the nullable-metadata.md file and the description of the new nullable-related attributes), PostSharp would have continued working fine even if we had done nothing. And we did a lot of testing; in fact, most of the time I spent implementing C# 8.0 support went into manual and automated testing.

The reason everything sort-of just works is that at the IL level, at which PostSharp operates, nullability annotations are represented as hidden attributes.

The hidden nullability attributes of C# 8

When you type string? myField in C#, compile it and then decompile with a pre-C# 8 decompiler, you will get [System.Runtime.CompilerServices.NullableAttribute(2)] string myField;. The NullableAttribute is one of two new "hidden" attributes. Its parameter determines the nullable state of the type:

  • 0 means null-oblivious (pre-C# 8 or outside a nullable annotation context),
  • 1 means non-nullable (e.g. "string"), and
  • 2 means "may be null" (e.g. "string?").

The use of attributes in this way means that code created before C# 8 can still consume C# 8 code. For us, it also meant a lot less work: after all, PostSharp is all about attributes, so we're well equipped to handle them.

That said, there were a couple of corner cases that we wanted to solve. They may seem a little convoluted, and most of our users will certainly never encounter them. But they existed, and we never want to ship a product with known defects; we've been burned by that before, when it forced us to make backwards-incompatible changes later to fix the defects.

So here's one such edge case:

Edge case 1: Nullability of methods introduced with [IntroduceMember]

PostSharp has an attribute called [IntroduceMember] which you can use to insert a property or a method into another class, like this:

[SelfDescribing]
public class Creature
{
  public string? Description { get; set; }
  public int? MaximumAge { get; set; }
} 
[Serializable]
public class SelfDescribingAttribute : InstanceLevelAspect
{  
  [ImportMember(nameof(Creature.Description))]
  public string ImportedDescription;
  [IntroduceMember]
  public string DescribeSelf()
  {
      return "I am " + ImportedDescription;
  }
}

In the example above, the SelfDescribingAttribute adds the method DescribeSelf to the target class Creature.

Now, PostSharp modifies the binary, not the source code, so you won't actually be able to use the method in this project or in projects in the same solution (because project references refer to source code, not the binary). That is why this feature is used mostly to add methods expected by frameworks (the most notable case being XAML/PropertyChanged).

However, if somebody else references your project as a DLL library (either by referencing the .dll file itself, or as a NuGet package), they will see the introduced method. From their perspective, the class would look like this:

public class Creature
{
  public string? Description { get; set; }
  public int? MaximumAge { get; set; }
  public string DescribeSelf();
} 

But what, then, is the nullability of the return type of the method DescribeSelf: is it non-nullable (string) or nullable (string?)? By the principle of least surprise, we felt the correct answer is "the same as in the template method", which here means non-nullable, so that's what we do: we make sure the metadata on the introduced member reflects that.

But if we did nothing (that is, did not copy any attributes from the template method onto the target method), then in this case the answer would be nullable. Why? Because the C# compiler doesn't just use NullableAttribute to mark which values are nullable; it also reduces assembly size by compacting several NullableAttributes into a single NullableContextAttribute.

The exact algorithm is described in the nullable-metadata.md file but it's along the lines of "if a class has more nullable members than non-nullable members, annotate only the non-nullable members with NullableAttribute and mark the class itself as nullable using NullableContextAttribute". The class would look like this: 

[Nullable(0)] // Class itself doesn’t have a nullability
[NullableContext(2)] // Members are nullable.
public class Creature
{
  public string Description { get; set; } // Inherits nullability from class
  public int MaximumAge { get; set; } // Inherits nullability from class
  public string DescribeSelf(); // Oops, this was supposed to be non-nullable.
} 

Now you may already see the problem. The target class was considered entirely nullable, but now we're introducing a method with a non-nullable return type to it. Therefore, we must take care to copy (or create, if necessary) proper attributes on any introduced methods (and the methods' parameters and return values) and on any introduced properties and events.

That's why the final class, as modified by PostSharp, would look like this if decompiled:

[Nullable(0)]
[NullableContext(2)]
public class Creature
{
  public string Description { get; set; }
  public int MaximumAge { get; set; }
  [NullableContext(1)] // The method’s return value and any parameters are non-nullable
  public string DescribeSelf();
} 
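To make the compaction concrete, here is a toy model of the heuristic in code. This is my own sketch based on the description above, not the actual Roslyn algorithm (which nullable-metadata.md specifies in full detail); the flag values follow the list earlier in the article (1 = non-nullable, 2 = nullable).

```csharp
using System.Collections.Generic;
using System.Linq;

static class NullableCompaction
{
    // Simplified model of the compaction heuristic. Returns the
    // NullableContextAttribute value for the type, plus the members
    // that still need an explicit NullableAttribute of their own.
    public static (byte Context, Dictionary<string, byte> ExplicitMembers)
        Compact(Dictionary<string, byte> memberFlags)
    {
        // The most common flag becomes the type-level context...
        byte context = memberFlags.Values
            .GroupBy(flag => flag)
            .OrderByDescending(group => group.Count())
            .First().Key;

        // ...and only the members that disagree with it keep an attribute.
        var explicitMembers = memberFlags
            .Where(pair => pair.Value != context)
            .ToDictionary(pair => pair.Key, pair => pair.Value);

        return (context, explicitMembers);
    }
}
```

For the Creature example, the flags {Description: 2, MaximumAge: 2, DescribeSelf: 1} yield a NullableContext of 2 for the class and an explicit NullableAttribute(1) only on DescribeSelf, which matches the decompiled listing above.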

Edge case 2: New attributes on fields

Let's move to another edge case, one that occurs when you add one of the new nullability custom attributes (AllowNull, DisallowNull, MaybeNull, …) to a field or property.

These attributes allow you to express the nullability semantics of properties and methods that are more complex than "always may be null" or "is never null". For example, you may put the new [AllowNull] attribute on a property which is otherwise non-nullable. That is a note to the compiler that "null is allowed to be put into this property, but, since this is a non-nullable property, it will never return null".

Here's how you might use such an attribute:

[AllowNull, // <- a C# 8 attribute
UseRandomNameIfNull] // <- an example LocationInterceptionAspect, explained further down
public string Name { get; set; }

The AllowNullAttribute combined with the fact that the property type (string) is non-nullable in C# 8, means that this property's nullable status can be explained in English like this: "I never return null, but feel free to assign null to me. I'll handle it."

In vanilla C#, you could handle it by implementing the property's getter to return something instead of null, or by implementing its setter to assign something to a backing field. With PostSharp, you can do the same thing with a LocationInterceptionAspect:

[Serializable]
public class UseRandomNameIfNullAttribute : LocationInterceptionAspect
{
  private string name;
  public override void OnGetValue(LocationInterceptionArgs args) 
  {
    args.ProceedGetValue();
    if (args.Value == null) {
      if (name == null) {
        name = R.GenerateRandomString();
      }
      args.Value = name;
    } 
  }
}
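For comparison, the vanilla-C# version mentioned above could look like this. This is my own minimal sketch of the pattern, not code from the article, and the generated name format is made up:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;

public class NamedItem
{
    private static readonly Random random = new Random();
    private string name;

    // [AllowNull] says: null may be assigned here, but because the property
    // type is non-nullable, callers can rely on the getter never returning null.
    [AllowNull]
    public string Name
    {
        // Substitute a generated name the first time we are read after a null assignment.
        get => name ?? (name = "Item-" + random.Next(1000, 9999));
        set => name = value;
    }
}
```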

But with PostSharp, you can also apply location interception aspects to fields, like this:

[AllowNull, UseRandomNameIfNull]
public string Name;

When I did that while testing PostSharp with C# 8, everything seemed fine — until I loaded the PostSharp-modified assembly in another project, at which point Visual Studio started complaining that I'm assigning a null to a non-nullable property. But why? Did I not use the AllowNull attribute properly?

Well, the way PostSharp makes location interception aspects work for fields is by transforming them into auto-implemented properties. That means that the IL code, decompiled, would look more like this:

[AllowNull, UseRandomNameIfNull]
public string Name { get; set; }

That seems just like the previous declaration, which works without a hitch. The problem is that the C# 8 compiler, when it sees [AllowNull] or [DisallowNull] on a property, silently moves those attributes to the property setter's parameter (the keyword "value"). A similar thing happens for [MaybeNull] and [NotNull]: those attributes get moved by the C# compiler onto the getter's return value.

This was a surprise to me. With no specification to consult, I looked through the GitHub history but found little: I remember a single comment about this on some related issue (I have since lost the link), and I know this wasn't yet decided in May 2019, but that's it. Either way, emulating Roslyn helped, so PostSharp now does what the C# compiler does and moves these attributes to the appropriate places if you declare them on a field.

Conclusion

As you can see, implementing support for C# 8.0 was not entirely straightforward, but we managed to address the corner cases anyway. But wow, do I miss the days when C# had a complete specification :). I completely empathize with the commenter at issue 64.

And there's still work to be done. PostSharp is not only a code weaver, but also a collection of libraries (such as PostSharp Logging or PostSharp Caching). As a library producer, we will eventually want to annotate our public API with question marks and the new attributes. I'm actually looking forward to this: going through all public types and methods and investigating their nullability sounds fun. Until then, even though our API remains null-oblivious, our code weaver handles code with nullable reference types well.


An Android app with PostSharp


I created and published my first Android app: an initiative tracker for pen-and-paper roleplaying games. If you have an Android tablet, you can download it from Google Play and try it out. Unlike my earlier attempts, this time I used Xamarin with XAML and PostSharp (the product we make). In this article, I walk through my experiences.

Soothsilver Initiative Tracker

The app looks like this:

In pen-and-paper roleplaying games such as Dungeons & Dragons, players take combat turns in the order of initiative: a number based on what each player rolls on a die. Because each player takes several turns per combat, and the order of players differs from combat to combat, playgroups use an initiative tracker to keep track of whose turn it is, who goes next, and so on.

As you might expect, there are many initiative trackers on Google Play already. I know because I tried them all, but none matched all of my requirements exactly, so, of course, I rolled my own :).

And it worked well: I made the app within half a day (though the subsequent process of publishing it on Google Play took about as long again), and our playgroup is still using it four months later.

I should mention that this is entirely my own personal hobby project: I host it on my own GitHub (give me a star), and PostSharp as a company certainly does not provide support for it (but feel free to ask me). We don't even officially support Android at the moment, though as this app shows, it generally works, including debugging.

Using Xamarin with XAML

I had made some attempts at Android development before, using Java/Kotlin and the native Android API, but I often got stuck and never created anything worth publishing. That's why, this time, I tried whether using C#/.NET would work better.

It did, curiously, but it still wasn't close to desktop development. You know how when you're developing a desktop application and you type in some code and press F5 and it immediately just runs? And how when you're developing a mobile application and you need to install several prerequisites and check manifests and set configuration options? That's still there, even when you're using Xamarin.

There's also the fact that everything is slower: the designer, the build, the transfer to your device or to the emulator, even the distribution process and upload to Google Play. I think it would go faster with more experience, but I still found the entire process more complicated. That's part of the reason why I don't create mobile games.

The user interface design, at least, was more familiar. I used Xamarin Forms with XAML design files and the paradigm is very similar to UWP and WPF so I created my screens quickly. Visual Studio now even supports live updates: I modified my XAML, saved the file, and the form on my device changed!

Of course, XAML also means INotifyPropertyChanged, the XAML solution to data binding.

INotifyPropertyChanged

I've used INotifyPropertyChanged in several projects so far and I still don't quite like it.

Personally, as far as binding values to user interface elements goes, my number one favorite approach is the low-level draw loop (as in XNA/MonoGame): you ask your model sixty times a second what you are supposed to draw, and you draw that. That isn't an option in complex UI frameworks.

My next favorite approach is the one taken by JavaFX: you use special smart properties (wrappers) in your model and bind them among themselves. I find that clearer and less boilerplate-heavy, but it gets complex, especially with larger elements like list views.

Regardless, Xamarin has XAML, and XAML means MVVM and INotifyPropertyChanged. Fortunately, this time, I have PostSharp.

PostSharp auto-implements everything about INotifyPropertyChanged. With it, I was able to reduce the interacting part of my model to this:

[NotifyPropertyChanged]
[Recordable]
public class Creature : INotifyPropertyChanged
{
  public string Name { get; [ThenSave] set; }
  public bool Friendly { get; set; }
  public int Initiative { get; set; }

  [SafeForDependencyAnalysis]
  public bool Active
  {
    get { return MainPage.Instance.Encounter.ActiveCreature == this; }
  }

  [SafeForDependencyAnalysis]
  public Xamarin.Forms.Color BackgroundColor
  {
    get
    {
      if (Friendly) { return Color.PaleGreen; } else { return Color.LightSalmon; }
    }
  }
  ...
}

PostSharp's solution to INotifyPropertyChanged is magic, so I'll walk you through what's happening here.

The [NotifyPropertyChanged] attribute is a PostSharp attribute and it means "whenever any property of this class changes, raise a PropertyChanged event for that property". For the properties Name, Friendly and Initiative, it means the event is raised after some code uses their setter.

But what about the properties Active and BackgroundColor which are getter-only?

The property MainPage.Instance.Encounter.ActiveCreature is referenced via a static property (MainPage.Instance) so there's no way to react to its change from within the class Creature. What I do in this app is that I use OnPropertyChanged to manually raise the PropertyChanged event for the property Active for all creatures whenever the active creature changes. PostSharp can't help here because of the unfortunate way in which I set this up. What I should have done instead is add to the Creature class a reference to its parent Encounter, which removes the need to refer to a static property.

For BackgroundColor, the situation is different. The property's value depends only on another property of the same creature: whether it's friendly or not. PostSharp can determine this (it reads the IL code of BackgroundColor's getter and sees that it references the property Friendly) and makes it so that whenever the value of Friendly changes, a PropertyChanged event is raised also for BackgroundColor — and I didn't need to write any code.
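Written out by hand, the dependency PostSharp derives here amounts to something like the following plain INotifyPropertyChanged code. This is my own sketch of the equivalent behavior, using a string instead of Xamarin.Forms.Color to stay self-contained:

```csharp
using System.ComponentModel;

public class CreatureViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private bool friendly;
    public bool Friendly
    {
        get => friendly;
        set
        {
            if (friendly == value) return;
            friendly = value;
            // Notify for the property itself...
            OnPropertyChanged(nameof(Friendly));
            // ...and for every property whose getter reads it.
            OnPropertyChanged(nameof(BackgroundColor));
        }
    }

    // Depends only on Friendly, so it must be refreshed whenever Friendly changes.
    public string BackgroundColor => Friendly ? "PaleGreen" : "LightSalmon";

    private void OnPropertyChanged(string name) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}
```

The second OnPropertyChanged call in the setter is exactly the kind of code PostSharp writes for you after reading the getter's IL.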

The [SafeForDependencyAnalysis] attribute in my code sample is a signal to PostSharp that even though the code within uses static properties, it's okay and PostSharp shouldn't emit a warning. PostSharp normally emits the warning to tell the user "hey, I can't automatically raise events in response to changes of static properties; are you sure you're handling that yourself?". It's necessary even for BackgroundColor because it refers to Color.PaleGreen and Color.LightSalmon, and those aren't constants, they're merely static fields. (They are readonly, so maybe the warning isn't really necessary and we could look into suppressing it.)

Other uses of PostSharp

You may have noticed a couple of extra attributes in my code sample.

Those weren't strictly necessary but since I was using PostSharp already, I figured, why not go all the way.

[ThenSave] is a method boundary aspect I created. It means, "after this method completes, save all creatures on disk so they're not lost when the user closes the app". I could have done this instead:

private string _name;
public string Name { get => _name; set { _name = value; ThenSave.SaveEverything(); } }

Which would do the same thing, but I feel like the solution with the [ThenSave] aspect is prettier and if I needed it for more than one property, it would also help save lines of code.

The last attribute I didn't talk about is [Recordable]. This one is a built-in PostSharp attribute which means "I remember everything that happens to me; you can use undo/redo."

Normally, when you implement undo/redo, for each possible action the user can take, you create a triplet: what happens when you take the action, what happens when you undo it, and what happens when you redo it.

PostSharp's undo/redo makes use of the fact that, most of the time, everything these actions do is change the values of some properties. So, whenever you change the value of a property of a [Recordable] object, the object remembers it, and the change is added to the undo stack, which I exposed with the Undo and Redo buttons on screen.

I also marked the list of all creatures (the class Encounter) as [Recordable]. That way, if I first change the name of one creature in the encounter and then another, both changes go onto the same undo stack and can be undone in turn by the same button.
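The idea can be illustrated with a hand-written sketch (my own simplification, not PostSharp's actual implementation): every setter pushes an undo/redo pair onto a stack shared by all recordable objects.

```csharp
using System;
using System.Collections.Generic;

// A shared stack of (undo, redo) action pairs, one pair per property change.
public class UndoStack
{
    private readonly Stack<(Action Undo, Action Redo)> undoStack = new Stack<(Action Undo, Action Redo)>();
    private readonly Stack<(Action Undo, Action Redo)> redoStack = new Stack<(Action Undo, Action Redo)>();

    public void RecordChange(Action undo, Action redo)
    {
        undoStack.Push((undo, redo));
        redoStack.Clear(); // a fresh change invalidates the redo history
    }

    public void Undo()
    {
        var change = undoStack.Pop();
        change.Undo();
        redoStack.Push(change);
    }

    public void Redo()
    {
        var change = redoStack.Pop();
        change.Redo();
        undoStack.Push(change);
    }
}

// A hand-made "recordable" creature: each setter records how to revert itself.
public class RecordableCreature
{
    private readonly UndoStack stack;
    private string name = "";

    public RecordableCreature(UndoStack stack) => this.stack = stack;

    public string Name
    {
        get => name;
        set
        {
            string oldValue = name, newValue = value;
            stack.RecordChange(() => name = oldValue, () => name = newValue);
            name = value;
        }
    }
}
```

Because every creature created with the same UndoStack records onto one shared history, this also mirrors the effect of marking the whole Encounter as recordable.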

I suppose the app didn't really need undo/redo functionality; most initiative trackers on Google Play don't have it. But with PostSharp it cost me very little to add, and it actually proved useful: typing is time-consuming on tablets, so undoing a mistaken name change rather than retyping the original name helps speed up gameplay.

Publishing on Google Play

I didn't technically need to publish the game: I mostly wanted it for myself and the build that Visual Studio placed on my tablet was good enough. But publishing seemed simple enough and maybe in the future I'll want to create a true game and this could be a trial run.

It wasn't actually all that simple. The Google Play Console, the online app that you use to upload and manage your Google Play games, has about twenty pages that you can fill in and about half of them are mandatory.

Uploading the binary itself failed several times for me when I checked the wrong checkboxes when creating the distribution build in Visual Studio. Visual Studio has a feature that allows you to upload your build to the Play Console from Visual Studio, but it broke at some point in the past and doesn't seem to work anymore. Still, for every problem I faced, there was a forum thread somewhere on the internet that pointed me in the right direction.

The cost of creating a Google developer account was just $25 and it's a one-time fee that I don't need to pay for future apps.

Whenever I made a change to the game data, either uploading a new build or even fixing a typo in the store listing description, it required a new verification from Google, which seems to be manual at least the first time around: it took about a week to get my first version approved (but subsequent approvals were faster). I didn't actually talk or chat with a human at any point.

Overall, it was a good experience, though I recommend that you do this on a powerful computer (because creating the distribution build is slow) and with good internet (because the distribution package for Google Play is large and you need to upload it).

Conclusion

I'm happy with how my app turned out.

I used the free Essentials edition of PostSharp (more than enough for a little app like this). As an employee, I had access to the full Ultimate edition but I didn't need it. (You may also be eligible for a free Ultimate license.)

If I hadn't used PostSharp, I certainly wouldn't have implemented undo/redo, and my Encounter and Creature classes would have grown somewhat in size and complexity, though with a bit of refactoring it would still be quite manageable, I think. That said, XAML is a technology where PostSharp shines, and I certainly prefer the current code to what it would have been without PostSharp.

Announcing PostSharp 6.5 RC: Performance, Docker Support and More


We're happy to announce the release of PostSharp 6.5 RC, available for download on our website.

Most of our efforts in PostSharp 6.5 went into improving the build-time and design-time experience of PostSharp. We also now proudly and officially support Docker, having successfully tested our product with all Docker images of .NET Core provided by Microsoft.

This release contains the following improvements:

  • Performance enhancements
  • Installer improvements
  • Docker support
  • Platform updates

 

Performance enhancements

The startup time of PostSharp on .NET Core has decreased significantly.

The decrease is from 1,100 ms to 400 ms, almost a threefold improvement. Building a lot of small projects should now be significantly faster. To achieve this improvement, we generate ReadyToRun images of PostSharp on the fly on each build machine. This feature can be disabled by setting the PostSharpReadyToRunDisabled MSBuild property to True.
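For instance, to opt out in a specific project, the property mentioned above can be set in the .csproj file (a sketch; the property can equally be passed on the MSBuild command line):

```xml
<PropertyGroup>
  <!-- Skip on-the-fly ReadyToRun image generation for PostSharp. -->
  <PostSharpReadyToRunDisabled>True</PostSharpReadyToRunDisabled>
</PropertyGroup>
```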

We are therefore glad to announce that PostSharp performance on .NET Core is now on par with .NET Framework!

 

Emitting errors and warnings is now significantly faster.

We removed approximately another 400 ms of overhead on the first message and further improved the time needed to emit subsequent messages. The expensive part of emitting a message was determining in which file and on which line the message should be anchored, and that is what we worked on.

Previously, on the first message, PostSharp loaded Roslyn to parse the source file and tried to locate the line and column of the offending code element. Loading Roslyn was a significant one-time overhead unless native images were present, and even then the cost of parsing was linear in the number of offending files, which could grow high if there were a lot of warnings. Now, we are using a Roslyn analyzer to export the locations of all code elements (you will find a new pspdb file in your output directory). This analyzer resides in the Roslyn process itself and uses the already-parsed Roslyn code model, so it is very fast. This approach also allows us to find the source code of types with no method body at all, such as interfaces or enums, which we were previously unable to do. The new strategy costs a little performance when there are no warnings at all, but usage data shows that only a minority of projects would be negatively affected.

The analyzer can be disabled by setting the PostSharpRoslynAnalyzerDisabled property to True, but in this case errors and warnings will not be resolved to a source code location.

 

PostSharp Tools for Visual Studio is now even smoother.

PostSharp Tools for Visual Studio is significantly better, since we continued to apply the async pattern everywhere. We also optimized memory usage, so you should get a better experience with large solutions.

 

Installer improvements

The installer now lets you:

  • choose which instances of Visual Studio PostSharp should be installed into,
  • kill blocking processes, and
  • easily see the installation log in case of failure of VsixInstaller.exe. 

 

Docker support

We've tested PostSharp on Docker thoroughly and fixed a few issues with thin Windows images.

 

Platform updates

  • We added support for the modern Microsoft.Extensions.Caching.Memory.IMemoryCache interface of .NET Core and Microsoft.Azure.ServiceBus, the new API for Azure Service Bus.
  • Visual Studio 2017 RTM (15.0) is no longer supported. The minimum supported version is now Visual Studio 2017 version 15.9 (LTS).

 

 

Summary

In PostSharp 6.5 we focused on two areas: improving the build-time and design-time experience of PostSharp. 

We are happy to say that, in line with our previous announcements about support for .NET Core, PostSharp performance on .NET Core is now on a par with the one on .NET Framework.

As always, it is a good time to update your VS extension and NuGet packages, and to report any problems via our support forum.

 

Happy PostSharping!

Announcing PostSharp 6.5 LTS: Performance, Docker Support and More


We are excited to announce the general availability of PostSharp 6.5 and give you a brief summary of new features and improvements. You can download PostSharp 6.5 from our website here.

With this release, we made significant improvements to the build-time and design-time performance of PostSharp.

It is worth mentioning that PostSharp 6.5 is an LTS (Long Term Support) release, meaning that it is supported for 5 years after the general availability date or for one year after we publish the next LTS release (the same policy as .NET Core LTS). PostSharp 6.5 also marks an important milestone for us: we can finally, proudly say that PostSharp on .NET Core now measures up to .NET Framework.

 

Here is the summary of all great improvements and features in PostSharp 6.5:

  • Support for Docker - we have successfully tested our product with all Docker images of .NET provided by Microsoft.
  • Build-Time performance enhancements: up to 2x faster - the startup time of PostSharp on .NET Core has decreased significantly and is now on par with the one on .NET Framework.
  • PostSharp Tools for Visual Studio performance improvements 
  • Support for IMemoryCache and the new Azure Service Bus API in Caching - we added support for the modern Microsoft.Extensions.Caching.Memory.IMemoryCache interface of .NET Core and for Microsoft.Azure.ServiceBus, the new API for Azure Service Bus.

We recommend checking out the RC announcement for more details.

Happy PostSharping!

PostSharp’s operations during COVID-19


Just a quick summary of how we are approaching the COVID-19 situation.

Business Continuity

Like most other companies in our industry, the PostSharp team is now fully remote and operating at full speed. Our recent migration to a cloud-only company made the transition even easier.

We have taken precautionary measures to reduce needless exposure to risk, improve the remote work experience, and keep our sanity.

At the time of writing this article, we don’t see any reason for interruptions in the business and support operations of PostSharp, and we expect everything to keep running as usual, including our technical support and customer assistance.

Community Support

We believe it is a double privilege to be able to continue business as usual without exposing ourselves to health risks. We believe everybody should contribute according to their abilities, so we have decided to give our employees 20% of their time for community support related to the COVID-19 crisis.

One of our teammates has already put our 3D printer to good use, working hard on printing face shields and donating them to the people who need them most. Our production capacity is now 12 units per day – a small drop in the hundreds being produced daily by the community.

You can watch our printer in action in this 24/7 live stream. Here’s a picture of our setup, supervised around the clock by our mascot, the debugging duck.

 

 

If you have access to a 3D printer and want to contribute, you can download the spec and follow this tutorial prepared by Prusa3D in cooperation with the Czech Health Ministry.

 

Stay safe and should you have any questions, please reach out to us at hello@postsharp.net.

 

 

PostSharp 6.6 Preview: Build low-level add-ins with PostSharp SDK – for free


Starting from PostSharp 6.6, we’re giving our users the keys to a secret chamber that we’ve previously kept for ourselves: the realm of low-level MSIL development using PostSharp SDK, the layer on which high-level components such as PostSharp Aspect Framework are built.

Best of all: this is going to be free. We are launching a new edition called PostSharp Community that will surpass the old PostSharp Essentials in terms of free features. Not only will it give you access to the lowest layers of PostSharp SDK for free, but also to OnMethodBoundaryAspect, MethodInterceptionAspect and NotifyPropertyChanged for simple cases – and to Contracts.

Our commercial approach: high quality & high abstraction

Since PostSharp 2.0, our mission has been focused on two points: high quality, and aspect-oriented programming.

  • Our emphasis on high quality meant that we spent much effort on engineering (at the risk of over-engineering, sometimes), testing, backward compatibility, robustness, documentation, and continuous delivery. High quality came at a high cost and lower agility, but it is a win-win: customers are more productive (our ultimate mission), and we can spend less time on support. The proof: we’ve always been able to keep our support time under 20 hours a week on average while supporting thousands of customers.
  • Our second mission, aspect-oriented programming, meant that we only required our users to have a standard knowledge of .NET. We designed our APIs in such a way that a developer would write correct code “by default”, even without reading the documentation. We used abstractions similar to the ones most .NET developers were already exposed to, and we considered it our job to bridge the abstraction gap between human thought and MSIL. Since our ultimate vocation is to bridge the gap between human thought and C#, we found it counter-productive to require even lower-level thinking. In a nutshell, PostSharp was not designed for hackers.

This strategy has been tremendously successful in recent years. Startups and corporations alike have relied on PostSharp to reduce boilerplate, compress development costs and improve long-term maintainability. But it had a cost, too, and we had to reflect this cost in our price list.

Our community approach: lower friction & low abstraction

On the other hand, the success of some open-source projects showed that there was a need in the community for a free, low-abstraction solution, even at the cost of a lower level of support, testing, and documentation. With PostSharp 6.6, we would like to address this need by opening a community initiative to build add-ins based on PostSharp SDK, our platform for MSIL manipulation.

These community add-ins will be developed on GitHub under a MIT open-source license. They will not be subject to our commercial standards of quality and standard processes, therefore they will also cause less friction. The downside of this strategy is that we expect their quality to be lower than that of PostSharp itself, and therefore we will not provide commercial support for the community add-ins.

To make sure these add-ins are available for free for the community, we are providing free access to the lowest layers of PostSharp SDK. Access to this platform is provided AS IS, without support (even to commercial customers), and with a much lower documentation standard than our commercial products. That said, PostSharp has been publicly available since 2005, and PostSharp SDK is exactly the platform we’re relying on for the upper layers of our product, so we trust its reliability is very high.

PostSharp community add-ins

We have already been working on a few add-ins, which come from three sources:

  • our own add-ins, written from scratch;
  • existing but previously internal works;
  • ports of open-source add-ins developed for other MSIL stacks such as Fody.

You can find the work in progress at https://github.com/postsharp:

What else is free in PostSharp Community?

Let’s face it, there were open-source alternatives for a few of the most basic but most useful features of PostSharp. We found it redundant to port these add-ins to PostSharp SDK when PostSharp already supports the same features with top quality and documentation, so we included the following features for free in PostSharp Community:

Our objective is to make PostSharp your one-stop solution for assembly transformation. Since there are often incompatibilities between IL weavers, it’s better if you can have just one. And we want it to be PostSharp. You can use the free features forever, or you can upgrade to a commercial edition.

Additionally, with PostSharp Community you can use all premium features of PostSharp, but only on a limited project size. We’ll blog later about this possibility.

How to create a PostSharp add-in?

The best way to get started is to look at PostSharp.Community.HelloWorld.

Arguably, the documentation is still very basic, but you can find a few directions here and in the HelloWorld readme file, and you may want to look at the PostSharp SDK class reference.

If you plan to release your add-in as open-source, you’re welcome to join our Slack community channel and ask for help. Please note we don’t have the capacity to provide support to PostSharp SDK to all users and will focus our help on open-source contributors.

Summary

At PostSharp we’ve always focused on high-quality, high-abstraction, well-engineered and, let’s face it, high-priced solutions – but we’ve neglected the users who needed low-level access to assemblies, were more sensitive to financial costs, and were less demanding in terms of quality.

With PostSharp 6.6, we’re introducing PostSharp Community, a free edition of PostSharp that gives access to community add-ins, simple features of PostSharp, as well as a limited usage of premium features. We’re also releasing 5 community add-ins under the MIT license. We’re grateful to the Fody open-source community for the possibility to port their add-ins to our platform.

Our focus on quality and engineering remains, but we’re opening the door to low-level and low-friction development.

You too can now create your own add-ins with PostSharp SDK, and choose to release them as open source, or keep them private. Your choice.

 

Happy PostSharping!

-gael

How to revolutionize security, during your free time


When you have a very urgent need but not the time to address it, adopting the right tools becomes very important. The author of this post, Simone Curzi, is a Principal Consultant from Microsoft Services, and this post tells his story: how he succeeded in making his vision real, in his spare time, thanks to PostSharp.

Intro

Let me introduce myself. I am Simone Curzi, a Principal Consultant from Microsoft Services. I am an Application Security and Threat Modeling expert, and as such, I have the opportunity and privilege to help a lot of organizations to be more secure. As a former developer and architect, my style is to seek every opportunity for improvement and to do what I deem necessary to achieve my goals. On that account, I am not shy about investing my free time and resources, and I have done that two or three times in the past years.

How the journey began

About four years ago, a group of Threat Modeling experts, which I am honored to belong to, faced a dilemma. While our experience grew, it was clear that the available tool to perform Threat Modeling was starting to show significant limits. It was focused mostly on the identification of the risks, without providing an adequate platform to identify the most relevant countermeasures and to create a proposed roadmap, which could be used by the development teams to define their remediation plans. Moreover, creating documentation was an entirely manual process, requiring a lot of work to represent the information in the most useful way, and the incorporation of even simple feedback involved a lot of effort. No, that was not an approach we could rely on for long, not if we wanted to improve and provide a better service. We continued for a while doing our job, trying to address this issue, but all the different options we tried ultimately failed. So, what to do?

At the end of 2017, I decided to try a different route. I knew what I wanted from a Threat Modeling tool; what I missed was the tool itself. At the time, my vision was a little simplistic because the old experience was still at the center of it, but I already had some goals and ideas in mind. Armed with these ideas, I started developing a tool in my spare time, and I was able to share an early incarnation of it internally with some colleagues in February 2018.

The first release was little more than a design tool with specialized functions. Still, the main ideas were already there: a sound object model based on standard terminology and on essential concepts missing from the previous experience, and a composable experience built as a blend of multiple functionalities that the user can select to get the tool she needs.

Figure 1 – The Threats Manager Platform in action.

The experience has since evolved, adding advanced reporting capabilities and functionalities to design roadmaps that help development teams understand how to mitigate the identified risks. With the Roadmap view, you can simply drag & drop an identified mitigation into the respective phase of the roadmap and see the effect on the estimated residual risk. The resulting experience is integrated and straightforward; it even makes it possible to see that a specific combination of activities would reach an acceptable residual risk after the Mid Term phase of the roadmap, as shown in the example below.

Figure 2 - The Roadmap tool and an example of mitigation planning.

From now on, the sky is the limit. Introducing advanced functionalities like the integration with Issue Tracking and Agile Planning systems like Jira and Azure DevOps is only a matter of time.

The tools of the trade

Building Threats Manager would not have been possible without a select few third-party libraries and tools. One of them was PostSharp, which I knew from having used it on some personal projects in the past. When I started Threats Manager, PostSharp played only a minor role for me: I used it for just a few simple scenarios:

  • to inject parameter validation code, using Contracts like NotNull and Required,
  • to automatically verify whether a class has been correctly initialized, using a custom OnMethodBoundaryAspect that intercepts the method entry and exits automatically if the object has not been initialized yet,
  • to propagate the Dirty status from the objects to the whole document, with an attribute that automatically marks the context as Dirty as soon as a property is set.

For example, the InitializationRequired Aspect is applied to methods and properties that must not be executed if the containing object has not been correctly initialized. This is particularly useful if the object needs to be initialized after its creation, with a method like Initialize or Open.

[PSerializable]
[ProvideAspectRole("Initialization")]
[AspectRoleDependency(AspectDependencyAction.Order, AspectDependencyPosition.Before, StandardRoles.Validation)]
[LinesOfCodeAvoided(2)]
public class InitializationRequired : OnMethodBoundaryAspect
{
    private bool _isDefaultValueInitalized;
    private object _defaultValue;

    public InitializationRequired()
    {
        _isDefaultValueInitalized = false;
        _defaultValue = null;
    }

    public InitializationRequired(object defaultValue)
    {
        _isDefaultValueInitalized = true;
        _defaultValue = defaultValue;
    }

    public sealed override void OnEntry(MethodExecutionArgs args)
    {
        if (args.Instance is IInitializableObject initializableObject && !initializableObject.IsInitialized)
        {
            if (_isDefaultValueInitalized)
                args.ReturnValue = _defaultValue;
            args.FlowBehavior = FlowBehavior.Return;
        }
    }
}

Figure 3 - The InitializationRequired Aspect to check if the object is initialized.

The class to use the InitializationRequired Aspect would, therefore, be structured as follows:

public class Link : ILink, IInitializableObject
{
    private IDataFlow _dataFlow;
    protected IThreatModel _model { get; set; }

    public Link()
    {
    }

    public Link([NotNull] IDataFlow dataFlow) : this()
    {
        _dataFlow = dataFlow;
        _model = dataFlow.Model;
        _associatedId = _dataFlow.Id;
    }

    public bool IsInitialized => Model != null && _associatedId != Guid.Empty;

    private Guid _associatedId;

    public Guid AssociatedId => _associatedId;

    [InitializationRequired]
    public IDataFlow DataFlow => _dataFlow ?? (_dataFlow = _model?.GetDataFlow(_associatedId));

    [InitializationRequired]
    public ILink Clone(ILinksContainer container)
    {
        Link result = null;
        if (container is IThreatModelChild child && child.Model is IThreatModel model)
        {
            result = new Link()
            {
                _associatedId = _associatedId,
                _model = model,
                _modelId = model.Id,
            };
            this.CloneProperties(result);
            container.Add(result);
        }
        return result;
    }

    public IThreatModel Model => _model;
}

Figure 4 - An example of a class using the InitializationRequired Aspect, extracted from the Threats Manager sources.

We are talking about tasks that are very easy to achieve with PostSharp, but that alone didn't necessarily justify the investment. On the other hand, I could not develop my tool during work time, only before or after it: this project was a matter of passion, not sanctioned by Microsoft, and thus a personal investment. From that point of view, even the limited value PostSharp provided at the time was important to me because, together with the other third-party tools I adopted, it allowed me to produce the first version in a few months.

Another essential advantage I have gained with PostSharp is the cleanness of the code. The adoption and enforcement of common patterns throughout the project makes it simple for me to maintain the solution and make changes over time. One such example is the functionality to handle the Dirty status. Initially, my idea was to have a class aptly named Dirty to maintain the Dirty status for the whole process. Now, I am in the process of associating the Dirty status with all classes in my object model. I'm not there yet, but I may have completed the migration by the time you read this post. If you are curious, start from AutoDirtyAttribute.cs, which is the main attribute I have written to support the Dirty status. Without PostSharp, this migration would have been very involved and would have required days. With PostSharp, I've completed it in less than four hours of work.
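The actual implementation lives in AutoDirtyAttribute.cs in the repository; purely as an illustrative sketch (the aspect body below is an assumption, not the repository code), such a dirty-propagating attribute can be built on PostSharp's LocationInterceptionAspect, which intercepts property setters:

```csharp
using PostSharp.Aspects;
using PostSharp.Serialization;

// Hypothetical sketch of a dirty-tracking aspect; see AutoDirtyAttribute.cs
// in the threatsmanager repository for the real implementation.
[PSerializable]
public class AutoDirtySketch : LocationInterceptionAspect
{
    public override void OnSetValue(LocationInterceptionArgs args)
    {
        // Perform the actual property assignment.
        args.ProceedSetValue();

        // If the containing object participates in dirty tracking, flag it.
        if (args.Instance is IDirty dirtyObject)
            dirtyObject.SetDirty();
    }
}
```

Applied as a multicast attribute, one such class can cover every settable property of the object model, which is why the migration described above took hours rather than days.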

More complex scenarios: the PropertiesContainerAspect

As my project grew, I found myself working again and again on the same code. For example, my object model provides an extensibility feature which allows me to dynamically define and associate metadata with almost any object, as collections of Properties. When I introduced this concept, I already had several different classes representing the Threat Model itself and its entities. Therefore, I had to modify each one of them, adding the same implementation to make them containers of Properties. I knew that my approach was not the most efficient; it is even a very well documented worst practice in most development books. But what else could I do? Of the various options available, none was ideal. I even started to think about adopting good old C++ for its multiple class inheritance, which would have been like burning down my own house because the air conditioning is defective and cannot be turned off. No, I liked my home more than I suffered from the cold.

At a certain point, I found myself adding yet another behavior, again the same code replicated across a dozen or so classes. At that point, the approach was no longer manageable, and I decided that I needed to do something about it.

Fortunately, I found a great option in PostSharp, which was already part of my project! After having explored some ideas, I opted to create a set of specialized aspects, each one of them implementing a specific behavior:

I now have a properties container aspect, which I apply to all the various classes that need to become containers of properties: this Aspect is a class that contains the implementation of the members of the interface that, in my system, represents the container of properties.

[PSerializable]
public class PropertiesContainerAspect : InstanceLevelAspect
{
    #region Extra elements to be added.
    [IntroduceMember(OverrideAction = MemberOverrideAction.OverrideOrFail, LinesOfCodeAvoided = 1, Visibility = Visibility.Private)]
    [CopyCustomAttributes(typeof(JsonPropertyAttribute), OverrideAction = CustomAttributeOverrideAction.MergeReplaceProperty)]
    [JsonProperty("properties")]
    public List<IProperty> _properties { get; set; }

    [IntroduceMember(OverrideAction = MemberOverrideAction.OverrideOrFail, LinesOfCodeAvoided = 2, Visibility = Visibility.Private)]
    public void OnPropertyChanged(IProperty property)
    {
        if (property == null)
            throw new ArgumentNullException(nameof(property));
        if (Instance is IPropertiesContainer container)
            _propertyValueChanged?.Invoke(container, property);
    }
    #endregion

    #region Implementation
    private Action<IPropertiesContainer, IProperty> _propertyAdded;

    [IntroduceMember(OverrideAction = MemberOverrideAction.OverrideOrFail, LinesOfCodeAvoided = 6)]
    public event Action<IPropertiesContainer, IProperty> PropertyAdded
    {
        add
        {
            if (_propertyAdded == null || !_propertyAdded.GetInvocationList().Contains(value))
            {
                _propertyAdded += value;
            }
        }
        remove
        {
            _propertyAdded -= value;
        }
    }

    private Action<IPropertiesContainer, IProperty> _propertyRemoved;

    [IntroduceMember(OverrideAction = MemberOverrideAction.OverrideOrFail, LinesOfCodeAvoided = 6)]
    public event Action<IPropertiesContainer, IProperty> PropertyRemoved
    {
        add
        {
            if (_propertyRemoved == null || !_propertyRemoved.GetInvocationList().Contains(value))
            {
                _propertyRemoved += value;
            }
        }
        remove
        {
            _propertyRemoved -= value;
        }
    }

    [IntroduceMember(OverrideAction = MemberOverrideAction.OverrideOrFail, LinesOfCodeAvoided = 1)]
    public IEnumerable<IProperty> Properties => _properties?.AsReadOnly();

    [IntroduceMember(OverrideAction = MemberOverrideAction.OverrideOrFail, LinesOfCodeAvoided = 3)]
    public IProperty GetProperty(IPropertyType propertyType)
    {
        if (propertyType == null)
            throw new ArgumentNullException(nameof(propertyType));
        return _properties?.FirstOrDefault(x => x.PropertyTypeId == propertyType.Id);
    }

    [IntroduceMember(OverrideAction = MemberOverrideAction.OverrideOrFail, LinesOfCodeAvoided = 20)]
    public IProperty AddProperty(IPropertyType propertyType, string value)
    {
        // Please refer to https://github.com/simonec73/threatsmanager/blob/master/Sources/ThreatsManager.Utilities/Aspects/Engine/PropertiesContainerAspect.cs for the actual implementation.
        return null;
    }

    [IntroduceMember(OverrideAction = MemberOverrideAction.OverrideOrFail, LinesOfCodeAvoided = 12)]
    public bool RemoveProperty(IPropertyType propertyType)
    {
        if (propertyType == null)
            throw new ArgumentNullException(nameof(propertyType));
        bool result = false;
        var property = GetProperty(propertyType);
        if (property != null)
        {
            result = _properties?.Remove(property) ?? false;
            if (result)
            {
                if (Instance is IDirty dirtyObject)
                    dirtyObject.SetDirty();
                if (Instance is IPropertiesContainer container)
                    _propertyRemoved?.Invoke(container, property);
            }
        }
        return result;
    }
    #endregion

    private IThreatModel GetModel()
    {
        IThreatModel result = null;
        if (Instance is IThreatModelChild modelChild)
            result = modelChild.Model;
        else if (Instance is IThreatModel model)
            result = model;
        return result;
    }
}

Figure 5 - A simplified implementation for the PropertiesContainerAspect.

Then I have to apply the same interface to all my classes that need to be property containers, create a default implementation of that interface, add any additional placeholders required by the Aspect, and that's it!

[Serializable]
[PropertiesContainerAspect]
public class Process : IProcess
{
    public Process()
    {
    }

    #region Default implementation.
    public Guid Id { get; }
    public string Name { get; set; }
    public string Description { get; set; }
    public event Action<IPropertiesContainer, IProperty> PropertyAdded;
    public event Action<IPropertiesContainer, IProperty> PropertyRemoved;
    public IEnumerable<IProperty> Properties { get; }
    public IProperty GetProperty(IPropertyType propertyType) { return null; }
    public IProperty AddProperty(IPropertyType propertyType, string value) { return null; }
    public bool RemoveProperty(IPropertyType propertyType) { return false; }
    #endregion

    #region Additional placeholder required by the Aspect.
    private List<IProperty> _properties { get; set; }
    #endregion
}

Figure 6 - A class using the PropertiesContainerAspect.

Now, when I have to modify the code, I go directly to my Aspect class and perform the changes I require, instead of having to change every class implementing the interface. That's quite an improvement, in my book! The net effect has been to simplify maintenance, allow the creation of much better code and, most importantly, produce a more robust solution in less time.

Open-source license

I know now that I can rely on PostSharp to help me with my endeavor, but what I did not know, and have recently learned, is that I can also rely on PostSharp Technologies – the makers of PostSharp – as partners in my initiative. I have recently published the core libraries of my tool and an SDK to extend it as Open Source: you can find them at https://github.com/simonec73/threatsmanager. PostSharp Technologies have been so kind as to provide a free license, allowing contributors of this Open Source project to use the Ultimate features with only the free Community license. Thank you again, PostSharp Technologies!

I hope that my experience can be useful for you to approach this excellent tool and get even more value out of it. I know I just scratched the surface, and I have already planned to use much more of it. And I also hope that you will like my work and will decide to contribute to it to make it even better.

Just to be clear, the Threat Modeling tool I have developed, Threats Manager Studio, is not yet available for everyone to use. For now, you are limited to the Threats Manager Platform engine and to the SDK to build its Extensions.

Feel free to use the Threats Manager Platform for creating your Threat Modeling tool, or for extending the one you own, if you are one of the few players in the space. And stay tuned: new, even better things are coming!

They said it was impossible. Now it is a reality.

That's all for now. Safe Threat Modeling to everyone!

About the author

Simone Curzi is a Principal Consultant from Microsoft Consulting Services. Simone has 20 years of experience covering various technical roles in Microsoft Services, and has fully devoted himself to Security for more than 5 years. A renowned Threat Modeling and Microsoft Security Development Lifecycle (SDL) expert, Simone is also one of the leaders of the worldwide Microsoft community on Application Security and an SME for the Security Community.

Some of Simone’s contributions are available through his blog. He can also be reached via his LinkedIn profile.

Blazor support in PostSharp 6.7


Today we would like to announce that the preview of Blazor support is now available in PostSharp 6.7. Blazor is a framework from Microsoft for client-side web development using .NET and C# instead of JavaScript. If you want to learn more about the framework, visit Blazor.net.

Intro

Because PostSharp works on the IL level and conforms to the CLI specification, there’s usually little development work required on our side when adding support for a new platform. However, a lot of effort goes into our build configuration and automation system to make sure that we can successfully execute all of our test suites on the target platform (and in some cases on physical devices). This was also the case with Blazor. We even had to build our own test runner based on Xunit that executes the tests within the web browser.

Overall, we’re very happy with the results of our tests: you can use PostSharp Framework and selected Patterns libraries in your Blazor applications today. Please read below for more detailed information about supported use cases.

What is supported

First of all, PostSharp supports Blazor as a runtime platform only via .NET Standard. You can use PostSharp in your .NET Standard libraries and then reference these libraries in your Blazor application project. Adding PostSharp directly to a Blazor application project is not supported.
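To make this structure concrete, here is a minimal sketch (the project name and version number are illustrative assumptions): the PostSharp-enhanced code lives in a .NET Standard class library, and the Blazor application references that library rather than PostSharp itself.

```xml
<!-- MyApp.Logic.csproj - a hypothetical .NET Standard library where PostSharp is applied. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="PostSharp" Version="6.7.*" />
  </ItemGroup>
</Project>
```

The Blazor application project then adds an ordinary `<ProjectReference>` to this library and does not reference the PostSharp package directly.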

Second, some of the Patterns libraries are not applicable to the Blazor platform and therefore are not supported. See the table below for the list of the PostSharp packages that support Blazor.

Package                         Supported
PostSharp                       Yes
PostSharp.Patterns.Common       Yes
PostSharp.Patterns.Aggregation  Yes
PostSharp.Patterns.Model        Yes
PostSharp.Patterns.Diagnostics  Yes
PostSharp.Patterns.Threading    N/A
PostSharp.Patterns.Xaml         N/A
PostSharp.Patterns.Caching      Yes

Configuring the Blazor linker

By default, all Blazor applications use the linker in the Release build configuration. The purpose of the linker is to discard unused code and reduce the size of the application. Linking is based on static analysis, and it cannot correctly detect all the code used by PostSharp.

To prevent the linker from removing the required code you need a custom linker configuration in your project. The configuration procedure is described on the Microsoft Docs page: Configure the Linker for ASP.NET Core Blazor. Please use the following code as your linker configuration file:

<linker>
  <assembly fullname="netstandard">
    <type fullname="*"></type>
  </assembly>
</linker>

Example

Let’s look at a simple Blazor application that uses PostSharp Aspect Framework. The full source code of this example is published in our samples browser. The project is based on the standard Visual Studio template “Blazor WebAssembly App”.

In this application we have the WeatherService class with the GetCurrentForecast() method that downloads weather forecast data from a server. For the purpose of the example 50% of the calls to our method fail with an exception:

public async Task<WeatherForecast[]> GetCurrentForecast()
{
    // Fail every other request.
    if (++counter % 2 == 1)
    {
        throw new WebException("Service unavailable.");
    }

    return await this.httpClient.GetFromJsonAsync<WeatherForecast[]>("sample-data/weather.json");
}

To make our application more resilient to server failures, we can retry the call with a short delay when the connection fails.

Using PostSharp to automate the implementation of the retry pattern we create a custom aspect AutoRetryAttribute and apply it to all our service classes.

[AutoRetry]
public class WeatherService
{
    // ...
}
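The published sample contains the actual aspect; purely as a sketch of the general shape (the retry limit, the exception filter, and the 3-second delay below are assumptions, not the sample's code), an async-aware retry aspect could look like this:

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using PostSharp.Aspects;
using PostSharp.Serialization;

// Hypothetical sketch of a retry aspect; the sample's real implementation may differ.
[PSerializable]
public class AutoRetryAttribute : MethodInterceptionAspect
{
    private const int MaxAttempts = 3;                             // assumed limit
    private static readonly TimeSpan Delay = TimeSpan.FromSeconds(3); // assumed delay

    public override async Task OnInvokeAsync(MethodInterceptionArgs args)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                // Invoke the intercepted method.
                await args.ProceedAsync();
                return;
            }
            catch (WebException) when (attempt < MaxAttempts)
            {
                // Wait before trying again.
                await Task.Delay(Delay);
            }
        }
    }
}
```

Because the attribute multicasts over the class, every method of WeatherService gets this retry behavior without any change to the service code itself.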

Finally, we also need to add a custom linker configuration file (LinkerConfig.xml) to our project as described above.

To see our custom aspect in action, build and run the sample application. Then click the “Fetch data” link in the left navigation bar. The data will load successfully (possibly with a few seconds’ delay).

You can also find the following log message in the web browser console:

Method failed with exception WebException 'Service unavailable.'. Sleeping 3 s and retrying. This was our attempt #1.

Summary

You can start using PostSharp 6.7 in your Blazor applications today.

Please note that Blazor support is still in preview status and some of the listed packages may have unresolved compatibility issues with Blazor.

We’re working on fixing any bugs we can find ourselves and will be happy to receive any feedback you can provide us.


Collecting logs and multiplexing


In PostSharp 6.7, we are releasing two new features for PostSharp Logging: log collecting and the multiplexer logging backend. Log collecting allows you to reuse your existing logging code with PostSharp. And with the multiplexer backend, you can send your logging output to two or more targets (such as console and a third-party logging framework) at the same time.

In this article, I will describe how to best use both of these new features.

Collecting logs

With log collecting, you can use your existing logging statements in greater harmony with PostSharp’s automatic logging.

What happens without log collecting?

Suppose, for example, that your codebase is using NLog to log events. Previously, you could add PostSharp [Log] attributes and have both your NLog loggers and PostSharp [Log] attributes send events to the same NLog targets. Your system looked like this:

But the resulting output wasn’t perfect. If your code was this:

[Log]
public void MyMethod1()
{
    logger.Info("Manual.");
}

Then your output looked like this:

DEBUG|MyNamespace1.MyClass1|MyClass1.MyMethod1()|Starting.
INFO |MyNamespace1.MyClass1|Manual.
DEBUG|MyNamespace1.MyClass1|MyClass1.MyMethod1()|Succeeded.

Notice two inconveniences: First, the text “Manual.” isn’t indented to the right, despite the fact that it’s inside the MyMethod1 method. Second, the information that the lines come from “MyClass1” is duplicated in the PostSharp [Log] entries.

Both problems have the same root cause: that PostSharp doesn’t ever process the manual logging line. The call to Info above is not intercepted by PostSharp so PostSharp can’t add the information it has about the class and about indentation. This means that your NLog formatting string needs to include the class information, and we get the ugly duplication.

There was a way around this issue: using PostSharp's own manual logging events. But compared to the abilities of other logging frameworks, PostSharp's manual event creation might not be as easy to use, and of course, switching to it would require rewriting your logging code.
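For illustration, a sketch of what that manual logging API looks like (based on PostSharp.Patterns.Diagnostics; consult the PostSharp documentation for the authoritative usage):

```csharp
using PostSharp.Patterns.Diagnostics;
using static PostSharp.Patterns.Diagnostics.FormattedMessageBuilder;

public class MyClass1
{
    // One LogSource per class, named after the declaring type.
    private static readonly LogSource logSource = LogSource.Get();

    public void MyMethod1()
    {
        // Equivalent of logger.Info("Manual."), but processed by PostSharp,
        // so it gets the same indentation and context as [Log] entries.
        logSource.Info.Write(Formatted("Manual."));
    }
}
```

Log collecting, described next, avoids exactly this rewrite.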

How log collecting can help

But with log collecting, you can set up your system differently:

Collecting logs means that when you use NLog statements, the logging events go to PostSharp instead of NLog targets. PostSharp can then enrich those logging events with its own data and send them to final NLog targets as though you used PostSharp manual logging API.

For NLog specifically, you accomplish this by using NLogCollectingTarget, our custom NLog target.

When you use log collecting, you can remove the logger name from your NLog formatting string and end up with a cleaner output from the same code, like this:

DEBUG|MyClass1.MyMethod1()|Starting.
INFO |  MyNamespace1.MyClass1|Manual.
DEBUG|MyClass1.MyMethod1()|Succeeded.

Note that indentation works now and that there is no duplication of logger names.

How to use log collecting

If you already use a combination of manual logging and PostSharp logging, and your manual logging is written in a logging framework we support, you may benefit from log collecting.

Here’s how you use it:

  1. Upgrade your PostSharp NuGet packages to the most recent 6.7 version (minimum 6.7.8).
  2. Set up log collecting for your logging framework by following our documentation. We can do log collecting for Serilog, NLog, Log4Net, Trace, TraceSource and ASP.NET.
  3. You can now use [Log] attributes according to our documentation and the logging features of your logging framework, at the same time, and still have a clean output.

Multiplexing

The multiplexer is a new logging backend that sends PostSharp logging output to two or more other logging backends.

For example, you can send all of your logging to Serilog, logging from user-relevant classes to console, and logging of errors or critical errors to a Loupe server. Multiplexing is like having two or more sinks/appenders/targets/providers in other logging frameworks.

Each “child backend” of a multiplexer may be for a different logging framework or you may have two backends for the same logging framework, but with different configuration. Both are useful in different scenarios.

The child backends are normal PostSharp Logging backends and you can still configure their options and verbosity as normal. Multiplexing works with all PostSharp Logging backends, including any backends you create yourself.

Let’s look at the code for the example I gave above. You want:

  • all log events to be sent to Serilog;
  • all events from classes in the FeedbackToUser namespace to be sent to Console;
  • all errors to be sent to Loupe.

You can do this by creating and configuring each backend separately, and then adding them all to the multiplexer backend, and setting the multiplexer as the default backend:

SerilogLoggingBackend serilog = new SerilogLoggingBackend(... serilog configuration ...);
serilog.DefaultVerbosity.SetMinimalLevel(LogLevel.Trace); // log everything

ConsoleLoggingBackend console = new ConsoleLoggingBackend();
console.DefaultVerbosity.SetMinimalLevel(LogLevel.None); // don't log stuff in general
console.DefaultVerbosity.SetMinimalLevelForNamespace(LogLevel.Trace, "MyApp1.FeedbackToUser"); // but log stuff in this namespace

Log.StartSession();
LoupeLoggingBackend loupe = new LoupeLoggingBackend();
loupe.DefaultVerbosity.SetMinimalLevel(LogLevel.Error); // only send Error and Critical events

MultiplexerBackend multiplexer = new MultiplexerBackend(serilog, console, loupe);
LoggingServices.DefaultBackend = multiplexer; // send our logging events to all three backends

You can learn more about multiplexing in PostSharp in our documentation.

Conclusion

Logging code can be pervasive and difficult to change once in your codebase, but with log collecting, you don’t need to change it when you adopt PostSharp. You can supplement your existing logging with PostSharp automatic logging and they will work perfectly together.

The multiplexer enables several new scenarios, including sending your logging output to targets in different logging frameworks at the same time.

You can learn more about these new features, log collecting and the multiplexer, in our documentation.

Announcing PostSharp 6.7 RC: Support for Blazor and Xamarin, and better integration with other logging frameworks


We are happy to announce that PostSharp 6.7 RC is available today. Included in this release are support for Xamarin and Blazor, as well as the introduction of two new features for PostSharp Logging: collecting logs from other logging frameworks into PostSharp Logging, and writing from PostSharp Logging into multiple target frameworks.

This version is available for download on our website and the packages for 6.7.9-rc are now available as prerelease on NuGet.

Blazor support

Starting from version 6.7 you can now use PostSharp Framework and selected Patterns libraries in your Blazor applications.

Worth mentioning is that PostSharp supports Blazor as a runtime platform only via .NET Standard. You can use PostSharp in your .NET Standard libraries and then reference these libraries in your Blazor application project. Adding PostSharp directly to a Blazor application project is not supported.

If you would like to look at a sample Blazor application that uses PostSharp Aspect Framework, you can find the full source code of an example published in our samples browser.

Find further details about Blazor support on our blog post here.

Xamarin support

As we have already announced in the PostSharp 6.7 preview blog post, we are excited to bring back Xamarin support. Just like with Blazor support, you will be able to use PostSharp in .NET Standard projects that can then be referenced in your Xamarin application project. The support includes creating custom aspects as well as using PostSharp Pattern Libraries.

Note that with Xamarin, we still support only .NET Standard libraries, so you cannot use PostSharp on the Xamarin-targeting project itself.

Better integration with other logging frameworks

In PostSharp 6.7, we have released two new features for PostSharp Logging: log collecting and the multiplexer logging backend.

Log collecting allows you to reuse your existing logging code with PostSharp. And with the multiplexer backend, you can send your logging output to two or more targets (such as console and a third-party logging framework) at the same time.

This means that there is no need to replace your existing logging code when adopting PostSharp to your projects. PostSharp can now collect your existing manual logging from any framework. The multiplexer enables several new scenarios, including sending your logging output to targets in different logging frameworks at the same time.
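For instance, the multiplexer can fan out to the console and Serilog at the same time. Here is a minimal sketch, assuming the MultiplexerBackend and ConsoleLoggingBackend classes from the PostSharp.Patterns.Diagnostics packages and an already-configured Serilog logger:

```csharp
using PostSharp.Patterns.Diagnostics;
using PostSharp.Patterns.Diagnostics.Backends.Console;
using PostSharp.Patterns.Diagnostics.Backends.Serilog;

// Send PostSharp Logging output to two backends at once:
LoggingServices.DefaultBackend = new MultiplexerBackend(
    new ConsoleLoggingBackend(),             // local console output
    new SerilogLoggingBackend(Log.Logger));  // third-party logging framework
```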

For more details, see this blog post explaining all you need to know about the new features.

Summary

With PostSharp 6.7 we’re bringing back support for Xamarin and introducing support for Blazor. For those using PostSharp Logging, we’re introducing two exciting new features: log collecting and the multiplexer logging backend. You can now collect your existing manual logging from (almost) any framework, and in addition you can send your logging output to targets in different logging frameworks at the same time.

Blazor and Xamarin support comes with a few limitations, which you can read more about in the Blazor blog post and in the 6.7 preview announcement.

As always, it is a good time to update your VS extension and NuGet packages, and report any problem via our support forum.

Happy PostSharping!

Error monitoring and detailed logging of an ASP.NET Core application with PostSharp and elmah.io


In this article, we show how to add error monitoring and detailed logging to an ASP.NET Core application. These features help you diagnose and fix errors. We will be using elmah.io, an error monitoring service, and PostSharp Logging, a .NET library for detailed logging.

We’ll start with an existing application that logs nothing and just by adding a few lines in the startup code, we will add automatic advanced error reporting. When we’re done, you’ll be able to browse through errors that occurred while processing web requests on the elmah.io website:

And you’ll be able to trace the root causes or other contextual information about the errors in detailed log files like this:

2020-09-22 23:55:13[DBG] IndexModel.OnPost() | Starting.
2020-09-22 23:55:13[DBG]   IndexModel.WriteInterestingFacts() | Starting.
2020-09-22 23:55:13[DBG]     IndexModel.get_NumberOfKgs() | Starting.
2020-09-22 23:55:13[DBG]     IndexModel.get_NumberOfKgs() | Succeeded: returnValue = "null".
2020-09-22 23:55:13[DBG]     IndexModel.GetKilograms("null") | Starting.
2020-09-22 23:55:13[WRN]     IndexModel.GetKilograms("null") | Failed: exception = {"System.ArgumentNullException"}.
System.ArgumentNullException: Value cannot be null. (Parameter 's')
   at System.Single.Parse(String s, IFormatProvider provider)
   at PostSharp.Samples.Logging.ElmahIo.Pages.IndexModel.GetKilograms(String numberOfStars) in C:\src\blog\PostSharp.Samples\Diagnostics\PostSharp.Samples.Logging.ElmahIo\Pages\Index.cshtml.cs:line 88
2020-09-22 23:55:13[DBG]   IndexModel.WriteInterestingFacts() | Succeeded.
2020-09-22 23:55:13[DBG] IndexModel.OnPost() | Succeeded.

Example application

In this article, we’ll be modifying a .NET Core web app based on the starter template that we call “What am I made of?”. You enter your mass in kilograms and the app gives you fun and interesting factoids about your body:

Now suppose that we released it and users started writing emails to us that the app isn’t calculating factoids for them. How do you figure out what’s wrong?

Adding error monitoring with elmah.io

elmah.io is a cloud-based error monitoring service. Your application functions as a client that sends warnings and errors over the internet to elmah.io servers and you, as the app maintainer, can then browse through these errors and analyze them in the elmah.io web dashboard.

You’ll need to create a new account (there is a free trial) to get your API key and log ID.

We’ll be using Serilog as the in-between link between PostSharp and elmah.io.

You’ll need some NuGet packages. Download and install these:

  • Serilog.Sinks.ElmahIO (sends data from Serilog to the elmah.io service)
  • Elmah.Io.AspNetCore.Serilog (adds additional request-based information to logs)

In your Main method, set up Serilog logging like this:

// Set up Serilog:
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()    // Capture all logs (PostSharp by default logs most traces at the Debug level)
    .Enrich.FromLogContext() // Add information from the web request to Serilog (used by elmah.io)
    .WriteTo.ElmahIo(new ElmahIoSinkOptions(
        "YOUR_API_KEY",           // Use key and ID from your elmah.io account
        new Guid("YOUR_LOG_ID"))
    {
        MinimumLogEventLevel = LogEventLevel.Warning // only send warnings and errors to elmah.io
    })
    .CreateLogger();

In your Startup class, add this after app.UseAuthorization();:

// This adds additional properties with information about
// the web request to Serilog logging events:
app.UseElmahIoSerilog();

Now you’re set up to use Serilog with elmah.io. If you added Serilog logging to your application, log lines at warning level or above would get sent to elmah.io.
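For example, a manual Serilog statement like the following would show up in your elmah.io dashboard (the message and the mass variable are hypothetical, just to illustrate the warning level):

```csharp
// Warning-level events and above are forwarded to elmah.io by the configuration above:
Log.Warning("Could not compute factoids for input {Mass}", mass);
```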

But we want to have logging and error monitoring without adding manual logging statements to the application, so we’ll proceed with setting up PostSharp instead.

Adding detailed logging with PostSharp Logging

PostSharp Logging is a library that adds automatic detailed logging to your code. You annotate your code with attributes, and PostSharp adds logging statements in your methods on its own during compilation.

To set it up, you’ll need to get a license and install it in your application. One way to install it is using our Visual Studio extension but you can also add it to the source code directly. There is a free 45-day trial as well as a free edition with some limitations.

To set it up, you first need some additional NuGet packages:

  • PostSharp.Patterns.Diagnostics (transforms [Log] attributes into logging statements)
  • PostSharp.Patterns.Diagnostics.Serilog (sends automatic logging to Serilog)
  • Serilog.Sinks.File (saves Serilog logs into a file)
  • Serilog.Sinks.ColoredConsole (when developing, it’s faster to check the console than a file)

Then, in your initialization code, replace the Serilog initialization code with this:

// Set up Serilog:
const string formatString =
    @"{Timestamp:yyyy-MM-dd HH:mm:ss}[{Level:u3}] {Indent:l}{Message}{NewLine}{Exception}";
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()    // Capture all logs (PostSharp by default logs most traces at the Debug level)
    .Enrich.FromLogContext() // Add information from the web request to Serilog (used by elmah.io)
    .WriteTo.ColoredConsole(outputTemplate: formatString) // Pretty formatting and indentation for console/file
    .WriteTo.File("log.log", outputTemplate: formatString)
    .WriteTo.ElmahIo(new ElmahIoSinkOptions(
        "YOUR_API_KEY",           // Use key and ID from your elmah.io account
        new Guid("YOUR_LOG_ID"))
    {
        MinimumLogEventLevel = LogEventLevel.Warning // only send warnings and errors to elmah.io
    })
    .CreateLogger();

// Set up PostSharp Logging:
LoggingServices.DefaultBackend = new SerilogLoggingBackend(Log.Logger)
{
    Options =
    {
        // Add exception stack traces to both detailed and elmah.io logs:
        IncludeExceptionDetails = true
    }
};

This means that PostSharp Logging will now create a log line into Serilog for each method you annotate with the attribute [Log]. But, you can also use multicasting to annotate the entire assembly by putting this line at the top of any file:

// Add PostSharp Logging to all methods and properties in the entire application:
[assembly: Log]

That’s all the code we need to write. Let’s see it in action now!

Debugging an issue with error monitoring and logging

Let’s get back to our example app and suppose that we’re receiving reports from users that the facts aren’t being calculated. So you open your elmah.io dashboard and look for warnings and errors. You find one particular suspicious warning:

Even though the exception was handled by the app, PostSharp reported it to elmah.io and you get the stack trace and also the method name and arguments. You can expand the line to go through even more information about the request provided by elmah.io.

Now, you may already be groaning or smiling, having spent hours on this kind of bug in other applications. But if you haven’t, it’s strange, right?

The exception message isn’t very helpful, but PostSharp Logging tells you the method received “82.6” as input, and elmah.io reports 82.6 as a POST parameter. That’s a valid number of kilograms a person might weigh, and indeed, if you type this number in the browser yourself, it works. But you can’t really close this as works-for-me: you have the evidence that it doesn’t right in front of you in the dashboard.

But you can now look at more detailed tracing of what happened during the request. Let’s open up the detailed log file at around the timestamp when the exception occurred.

Here’s what we get:

2020-09-22 11:58:22[DBG] IndexModel.OnPost() | Starting.
2020-09-22 11:58:22[DBG]   IndexModel.WriteInterestingFacts() | Starting.
2020-09-22 11:58:22[DBG]     IndexModel.get_NumberOfKgs() | Starting.
2020-09-22 11:58:22[DBG]     IndexModel.get_NumberOfKgs() | Succeeded: returnValue = "82.6".
2020-09-22 11:58:22[DBG]     IndexModel.GetKilograms("82.6") | Starting.
2020-09-22 11:58:22[DBG]       IndexModel.GetUserCulture() | Starting.
2020-09-22 11:58:22[DBG]       IndexModel.GetUserCulture() | Succeeded: returnValue = {cs-CZ}.
2020-09-22 11:58:22[WRN]     IndexModel.GetKilograms("82.6") | Failed: exception = {System.FormatException}.
System.FormatException: Input string was not in a correct format.
  at System.Number.ThrowOverflowOrFormatException(ParsingStatus status, TypeCode type)
  at System.Number.ParseSingle(ReadOnlySpan`1 value, NumberStyles styles, NumberFormatInfo info)
  at System.Single.Parse(String s, IFormatProvider provider)
  at PostSharp.Samples.Logging.ElmahIo.Pages.IndexModel.GetKilograms(String numberOfStars) in C:\src\blog\PostSharp.Samples\Diagnostics\PostSharp.Samples.Logging.ElmahIo\Pages\Index.cshtml.cs:line 88
2020-09-22 11:58:22[DBG]     IndexModel.set_Result("I can't tell you anything about your body.") | Starting.
2020-09-22 11:58:22[DBG]     IndexModel.set_Result("I can't tell you anything about your body.") | Succeeded.
2020-09-22 11:58:22[DBG]   IndexModel.WriteInterestingFacts() | Succeeded.
2020-09-22 11:58:22[DBG] IndexModel.OnPost() | Succeeded.

And there’s our root cause. PostSharp tells you that the method GetUserCulture, called right before the failing method call, returned cs-CZ, a culture that uses a comma as the decimal separator! The application must have been programmed to use the client’s culture which explains why the same input works for you and doesn’t for some of your users.
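The failure is easy to reproduce in isolation. Here is a minimal sketch of the culture pitfall (not code from the sample app):

```csharp
using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        // The invariant culture uses '.' as the decimal separator, so this parses:
        Console.WriteLine(float.Parse("82.6", CultureInfo.InvariantCulture));

        // cs-CZ uses ',' as the decimal separator, so the same input throws:
        try
        {
            float.Parse("82.6", new CultureInfo("cs-CZ"));
        }
        catch (FormatException)
        {
            Console.WriteLine("FormatException, as seen in the log");
        }
    }
}
```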

Conclusion

In this article, we presented a way to combine elmah.io and PostSharp to add detailed automatic logging and easy-to-use error reporting to an ASP.NET Core application, and how to use these features to find and fix errors.

You can learn more about elmah.io here, more about PostSharp here, and you can download our example application at GitHub.

Debugging from devices with Conveyor and PostSharp Logging


If you’re developing a web application, there are times you want to test and debug it from devices other than your development machine: from a phone that’s not connected to the local network, for instance, or from an online service if you’re building a webhook. In this article, we’ll show how you can do that – even if the website is not yet deployed – and also how you can find errors in your code in such a situation with detailed automatic logging.

We will be using Conveyor, a tool that builds a tunnel between your local machine and the Internet and PostSharp Logging, which adds highly detailed logging to your .NET projects with zero impact on source code.

Example application

We’ll start with an example ASP.NET application: a time tracker that allows users to register time spent working. You can download the code from GitHub.

In the app, we have a “clock in” form where you fill in your hours:

But we also added another feature: if you access the site from a mobile device, the form has an extra field, “location”, so that you register from where you are working. So now we want to test that the extra field indeed displays for mobile devices and that it works.

Debugging over net

To do that, we’ll need a mobile device to connect to the server. This would be easy if the server was publicly accessible but during development, it’s probably running on an IIS Express local server that only accepts connections from localhost.

We’ll use Conveyor, a free Visual Studio extension, to connect to that server anyway.

Download and install the Conveyor extension and then run the example application from Visual Studio. Conveyor will show you an address-and-port at which you can access the application from other devices on the LAN:

Alternatively (e.g., if your test phone isn’t on the same LAN), you can also tunnel the connection through a Conveyor server over the internet (here’s an explanation). To do this, click “Access Over Internet” and follow the prompts, which include registering on the Conveyor website.

Eventually, you will get a URL that you can connect to from your phone or tablet, wherever they are.

You connect… and see that the webpage looks just the same as on desktop. The Location field is not there:

There’s probably a bug.

Detailed logging

You can think of several places where the bug might be hiding. The HTML or CSS might be bad, the rendering of HTML might have failed, device detection might not have worked right. There are, of course, several ways to tackle this, but in this article, we’ll go with analyzing logs.

Now, our application doesn’t currently log anything, and we don’t want to spend time adding logging everywhere so we’ll go with an automatic solution: PostSharp Logging.

Add the following NuGet packages to your project:

  • PostSharp.Patterns.Diagnostics
  • PostSharp.Patterns.Diagnostics.Serilog
  • Serilog.Sinks.File

Then, at the entry point of the application (in our case, that’s Application_Start in Global.asax), add the following code:

// Configure Serilog to send logging events to a file:
const string template =
    "{Timestamp:yyyy-MM-dd HH:mm:ss} [{Level:u3}] {Indent:l}{Message}{NewLine}{Exception}";
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .WriteTo.File(@"C:\Logs\log.log", outputTemplate: template)
    .CreateLogger();

// Configure PostSharp to send automatic logging to Serilog:
LoggingServices.DefaultBackend = new SerilogLoggingBackend(Log.Logger);

Then, add automatic logging to all methods in your application by adding this line at the beginning of any one file in your project:

// Apply logging to all methods and properties:
[assembly: Log]

In a real-life project, you will soon figure out that this is way too verbose. You can select which methods, classes or namespaces to trace with multicasting.
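For example, you could narrow the aspect down to the controllers only. This is a sketch using PostSharp’s AttributeTargetTypes property; the namespace here is hypothetical:

```csharp
// Trace only the types in the Controllers namespace:
[assembly: Log(AttributeTargetTypes = "TimeTracker.Controllers.*")]
```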

Finally, you will need to get a license for PostSharp Logging, either a free trial or a free Community edition with some limitations. You can install the license either by installing the PostSharp Visual Studio extension or by adding the license to your source code.

Let’s now run the program again from Visual Studio, connect to the page from a phone and look in the log file. Here’s what we find:

2020-09-24 11:41:47 [DBG] HomeController.ClockIn() | Starting.
2020-09-24 11:41:47 [DBG]   UserDevice..ctor("Mozilla/5.0 (Linux; Android 9; SM-A530F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.101 Mobile Safari/537.36") | Starting.
2020-09-24 11:41:47 [DBG]     UserDevice.DetermineDeviceFormat("Mozilla/5.0 (Linux; Android 9; SM-A530F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.101 Mobile Safari/537.36") | Starting.
2020-09-24 11:41:47 [DBG] HomeController.Index() | Succeeded: returnValue = {System.Web.Mvc.ViewResult}.
2020-09-24 11:41:47 [DBG]     UserDevice.DetermineDeviceFormat("Mozilla/5.0 (Linux; Android 9; SM-A530F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.101 Mobile Safari/537.36") | Succeeded: returnValue = {Desktop}.
2020-09-24 11:41:47 [DBG]     UserDevice.set_Format({Desktop}) | Starting.
2020-09-24 11:41:47 [DBG]     UserDevice.set_Format({Desktop}) | Succeeded.
2020-09-24 11:41:47 [DBG]   UserDevice..ctor("Mozilla/5.0 (Linux; Android 9; SM-A530F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.101 Mobile Safari/537.36") | Succeeded.
2020-09-24 11:41:47 [DBG]   UserDevice.get_Format() | Starting.
2020-09-24 11:41:47 [DBG]   UserDevice.get_Format() | Succeeded: returnValue = {Desktop}.
2020-09-24 11:41:47 [DBG] HomeController.ClockIn() | Succeeded: returnValue = {System.Web.Mvc.ViewResult}.

The log file traces how the web request proceeded. The most suspicious line is this:

UserDevice.DetermineDeviceFormat("Mozilla/5.0 (Linux; Android 9; SM-A530F) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.101 Mobile Safari/537.36") | Succeeded: returnValue = {Desktop}.

It seems the application has a method named DetermineDeviceFormat that accepts a user-agent string and, in this case, returned a value named “Desktop”.

This is a good clue: it is likely that the bug is in identifying whether a device is mobile or desktop and that the body of the method DetermineDeviceFormat is at fault. Sure enough, when we look inside, we find that the method tests the user string for the word “android” rather than “Android” and so fails to identify that the client is mobile.
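A case-insensitive comparison avoids this class of bug. Here is a sketch (the real DetermineDeviceFormat method lives in the sample project; this helper is hypothetical):

```csharp
using System;

static class UserAgents
{
    // Compare case-insensitively instead of searching for the literal "android":
    public static bool IsMobile(string userAgent) =>
        userAgent.IndexOf("Android", StringComparison.OrdinalIgnoreCase) >= 0;
}
```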

Conclusion

In this article, we presented two ways to improve your debugging experience: with Conveyor, you gain the ability to access your local development server from other devices, or even from the internet, and with PostSharp Logging, you can add useful detailed and automatic logging.

You can learn more about Conveyor and about PostSharp Logging at their respective websites, and you can download our example project from GitHub.

Announcing PostSharp 6.7: Support for Blazor and Xamarin, and better integration with other logging frameworks


Just 2 weeks after releasing PostSharp 6.7 RC, we are happy to announce the general availability of PostSharp 6.7. This version is available for download on our website.

Starting with the 6.7 release, support for Xamarin is back and we have introduced support for Blazor. In addition, we are excited to announce the release of two new features for PostSharp Logging: log collecting and the multiplexer logging backend. This means that PostSharp Logging now allows collecting logs from other logging frameworks into PostSharp Logging, and writing from PostSharp Logging into multiple target frameworks.

Here is the summary of all new features in PostSharp 6.7:

  • Blazor support

You can now use PostSharp Framework and selected Patterns libraries in your Blazor applications. Note that adding PostSharp directly to a Blazor application project is not supported. Read more here.

  • Xamarin support

You will be able to use PostSharp in .NET Standard projects that can then be referenced in your Xamarin application project. Note that with Xamarin, we still just support .NET Standard libraries, so you cannot use PostSharp on the Xamarin-targeting project itself. Read more here.

  • Better integration with other logging frameworks

There is no need to replace your existing logging code when adopting PostSharp in your projects, and the multiplexer feature now enables several new scenarios, including sending your logging output to targets in different logging frameworks at the same time. Read more here.

 

For more details, please read the 6.7 RC announcement.

Happy PostSharping!

Intercepting methods with PostSharp Community


Method interception is a technique where you annotate a method and then when it’s called, an interceptor is executed instead of the method body. PostSharp Community, the free edition of PostSharp, allows you to add such interceptors to your code.

Let’s examine method interception by going through some examples from my own use:

[Retry]

Suppose you have a method that tries to write a transaction to a highly contested database. Sometimes, the transaction fails and throws an exception but in that case you want to try it again, since it will likely succeed when run a second time. Without method interception, you could do it like this:

void SaveToDatabase(File file)
{
    while (true)
    {
        try
        {
            // Execute transaction...
            return; // success, stop retrying
        }
        catch (TransactionRolledBackException)
        {
            // Let's try again.
            continue;
        }
    }
}

This works fine but you probably have more than one such method. Also, the retry functionality is probably more complicated: there may be a maximum number of retries, a timeout, or logging of failures. So you’ll want to factor the retry code out.

Here’s how you do it with PostSharp interception. First, you create the interception aspects:

[Serializable]
public class RetryAttribute : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        while (true)
        {
            try
            {
                args.Proceed();
                return; // success, stop retrying
            }
            catch (TransactionRolledBackException)
            {
                // Let's try again.
                continue;
            }
        }
    }
}

Here, args.Proceed() means “call the original method”.

Then, you apply the Retry method interception aspect to your main code:

[Retry]
void SaveToDatabase(File file)
{
    // Execute transaction...
}

Now, during compilation, PostSharp rewrites your SaveToDatabase method so that its method body only calls the interception aspect, and the original method body is called only when the interception aspect calls args.Proceed.

We have a more advanced retry sample as well.

[Cache]

I said that the original method body is called only when the aspect calls args.Proceed. That also means the original method body might not get called at all.

You could create an aspect like this:

[Serializable]
public class CacheAttribute : MethodInterceptionAspect
{
    private bool cached;
    private object knownResult;

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        if (cached)
        {
            args.ReturnValue = knownResult;
        }
        else
        {
            args.Proceed();
            knownResult = args.ReturnValue;
            cached = true;
        }
    }
}

If you then annotate a method with [Cache], the method body will only run once. Subsequent calls will only return the cached knownResult value.

This example doesn’t take into account that you might want to call the method body again if the method was called on a different object, with different parameters or if enough time already elapsed.
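One way to extend it is to key the cache on the target instance and the argument values. The following is a sketch that assumes PostSharp’s Arguments.ToArray() and MethodInterceptionArgs.Invoke() members; a production cache would also need expiration and a less naive key:

```csharp
using System;
using System.Collections.Concurrent;
using System.Reflection;
using PostSharp.Aspects;

[Serializable]
public class CacheByArgsAttribute : MethodInterceptionAspect
{
    [NonSerialized]
    private ConcurrentDictionary<string, object> cache;

    public override void RuntimeInitialize(MethodBase method)
    {
        cache = new ConcurrentDictionary<string, object>();
    }

    public override void OnInvoke(MethodInterceptionArgs args)
    {
        // Key on the target instance plus the argument values:
        string key = args.Instance + "|" + string.Join(", ", args.Arguments.ToArray());

        // Call the original method body only on a cache miss:
        args.ReturnValue = cache.GetOrAdd(key, _ => args.Invoke(args.Arguments));
    }
}
```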

We have a more advanced caching sample as well.

[RunOnOwnThread]

Xunit normally runs tests on thread pool threads. But, in some unit tests for the PostSharp Threading library, I need to make sure that the unit test runs on a non-thread-pool thread.

Method interception could help here as well, with this aspect:

[Serializable]
public class RunOnOwnThread : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        Exception caughtException = null;
        Thread t = new Thread(() =>
        {
            try
            {
                args.Proceed();
            }
            catch (Exception ex)
            {
                caughtException = ex;
            }
        });
        t.Start();
        t.Join();
        if (caughtException != null)
        {
            throw caughtException;
        }
    }
}

Then, by writing unit tests like this:

[Fact, RunOnOwnThread]
public void TestThreadPoolOperation()
{
    // unit test here
}

I had a guarantee that the unit test will run on its own thread.

Works on all methods

PostSharp interception aspects work on all kinds of methods: you can intercept both instance and static methods, and both public and private methods. Unlike with interceptors of dependency injection frameworks, you don’t need interfaces and the methods don’t need to be virtual.

That’s because PostSharp does interception by rewriting the method bodies of the existing methods rather than by subclassing and creating overrides.
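For example, the [Retry] aspect from the first section attaches just as well to a private static method. The class and method here are hypothetical:

```csharp
public class Database
{
    [Retry]
    private static void FlushPendingWrites()
    {
        // The method body is rewritten in place at build time;
        // no proxy, interface, or virtual dispatch is involved.
    }
}
```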

Conclusion

With method interception aspects, you can add cross-cutting functionality to your methods with attributes. This keeps your code clean and avoids code duplication. This functionality is now part of PostSharp Community Edition, which is available for free.

For more information, see our documentation or sample code.

When code can't fit your brain: NDepend and PostSharp


The size and complexity of codebases have exploded in the last decade. What can you do when your codebase no longer fits your brain? In this article I’ll suggest two completely different tools: NDepend to visualize the code, and PostSharp to reduce its complexity.

Since PostSharp is itself a complex codebase, we’ll use NDepend to produce some interesting graphs out of it.

NDepend is like a swiss-army knife for .NET developers. Among the many use cases the tool can handle is providing visualizations that help you with understanding complex code bases.

PostSharp is an extension to C# that allows you to remove repetitive code from your projects.

Reducing complexity with PostSharp

With PostSharp, you can dramatically reduce the boilerplate code that stems from the implementation of design patterns or non-functional requirements like logging or caching.

Instead of cluttering your source code with boilerplate, you add a few custom attributes telling the compiler what needs to be done. These custom attributes are called aspects and they describe how the target code should be enhanced. At build time, after the C# compiler finishes, PostSharp injects the repetitive code directly into your binaries.

The result: your code is shorter, simpler, and more readable. Because some concerns like caching, logging, INotifyPropertyChanged or multithreading are abstracted away into aspects, your code is more likely to fit your brain.

For example, you can implement INotifyPropertyChanged just by adding an attribute:

[NotifyPropertyChanged] // 1. Add this.
public class CustomerViewModel
{
    public CustomerModel Customer { get; set; }

    public string FullName
    {
        get
        {
            // 2. Now, a change in Customer, Customer.FirstName or
            // Customer.PrincipalAddress.Line1 will trigger the event.
            return string.Format("{0} {1} from {2}",
                Customer?.FirstName,
                Customer?.LastName,
                Customer?.PrincipalAddress?.FullAddress);
        }
    }
}

Or, you can implement method memoization like this:

[Cache] // 1. Add this.
public static Customer GetCustomer(int id)
{
    return ExpensiveDatabaseCall(); // 2. Runs only once per customer (until cache is invalidated)
}

Understanding the PostSharp code base with NDepend

PostSharp lists 41 packages on NuGet, made from 75 C# projects (not including tests), many of them targeting several frameworks. Some parts of the code are 15 years old, while others address the newest features of .NET 5 and C# 9.

How to make sense of such a complex and extensive codebase? This is where NDepend can help.

Let’s start with a dependency graph of PostSharp assemblies, which in NDepend looks like this:

Note that I created this graph from assemblies, not from source code, so you can visualize even projects that you don’t own. The size of the box of each assembly corresponds to the assembly’s code size (specifically, the total number of lines of code if source code is available, otherwise the number of IL instructions).

This is a good first look at a new code base that can help you understand what is what. In our case, we see PostSharp.dll, our main redistributable library, on the right, and the entire Patterns library sitting to the left.

All the green assemblies come from PostSharp Logging because we have a separate library (and a NuGet package) for each logging framework that we support officially, and here we can see that they all depend on the main PostSharp Logging assembly, the Commons library and of course the main redistributable.

An inheritance diagram

Let’s now zoom in on the main feature of PostSharp: the aspect framework. In PostSharp, you add functionality to your code by annotating it with attributes which all extend the Aspect class.

With NDepend, we can easily create an inheritance diagram of all classes that extend Aspect across all of our assemblies, even if they are in different Visual Studio solutions:

The highlighted red box, the MethodInterceptionAspect, is the currently selected class. The blue classes are those used by the MethodInterceptionAspect and the green classes are those that subclass MethodInterceptionAspect.

MethodInterceptionAspect is an interceptor - a class that, when you apply it to a method, replaces that method’s body with its OnInvoke method (and you can then call the original method body from the OnInvoke).

With PostSharp, you can subclass MethodInterceptionAspect to create your own interceptors and you can see that aspects in our Caching and Threading libraries do just that.
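A minimal subclass looks like this (a sketch, not code from the Caching or Threading libraries):

```csharp
using System;
using System.Diagnostics;
using PostSharp.Aspects;

[Serializable]
public class TimingAspect : MethodInterceptionAspect
{
    public override void OnInvoke(MethodInterceptionArgs args)
    {
        var stopwatch = Stopwatch.StartNew();
        args.Proceed(); // call the original method body
        Console.WriteLine($"{args.Method.Name} took {stopwatch.ElapsedMilliseconds} ms");
    }
}
```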

Finally, let’s use NDepend to help debug a specific issue.

For example, suppose that we have a flaky test in our Caching product (n.b. our caching tests aren’t actually randomly failing). PostSharp Caching caches return values of your methods: you annotate a C# method with the [Cache] attribute and each time it’s called with the same arguments, you get the previous result instead of the method body being called again.

But what if a test says that sometimes, the original method body is called twice anyway? What we can do is find the only place that stores the cached value in PostSharp Caching, the SetItemCore() method, and create a caller-callee graph for it in NDepend, creating a complete map of all methods that call it, even indirectly. This is what that looks like:

The culprit may well be somewhere in this area. The class AutoReloadManager in particular seems suspicious. That could be the place that’s generating a random behavior and we can go investigate it.

Conclusion

The size and complexity of codebases has exploded during the last decade, and we need tools to cope with this challenge. In this article, we looked at two different but complementary approaches: NDepend and PostSharp.

Visualization tools like NDepend can help you explore and understand such code, for example, when you’re moving to work on another area of the code base.

Let’s clarify that NDepend is not just for code visualization. The tool can handle many other scenarios related to code quality, technical-debt estimation, code coverage by tests, code metrics, code querying and more. See a list of use cases here. You can download a free full-featured trial here.

The approach of PostSharp, on the other hand, is to help you eliminate boilerplate code, and to reduce and simplify your code base. It also contains many useful .NET libraries that can add automatic detailed logging, method memoization, automatic and smart INotifyPropertyChanged, and other features to your code.

You can download the free Community edition of PostSharp on the official website.


Multicasting: Enhance a group of methods with just one attribute


Attribute multicasting, in PostSharp, is a way to apply an aspect (such as method interception) to many types or methods with just one attribute instance. It’s at the core of the ability of PostSharp to reduce the number of lines of code in your project.

In the most basic use case, you annotate a class with a method interception/method boundary attribute and it’s multicast (applied) to all methods in that class, but in this blog post, we’ll go over some more advanced use cases as well.

Multicasting is included in all versions of PostSharp including the free PostSharp Community edition, and works also for community add-ins such as ToString and StructuralEquality.

What multicasting means

Suppose MyLog is a method boundary aspect that writes to standard output at the beginning and end of each target method. It could look like this:

[Serializable]
public class MyLog : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        Console.WriteLine("Entering");
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        Console.WriteLine("Leaving");
    }
}

A method boundary aspect can target methods only, so if you apply it to a type, PostSharp attribute multicasting will instead cause it to apply to all methods of that type.

In the following example, if you write the code on the left, PostSharp will transform your code at build time as though you actually wrote the code on the right, but you avoided some code duplication.

diagram
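In text form, the before/after transformation looks roughly like this (OrderService is a hypothetical class):

```csharp
// What you write:
[MyLog]
public class OrderService
{
    public void Create() { }
    public void Cancel() { }
}

// What PostSharp compiles, in effect:
//
// public class OrderService
// {
//     [MyLog] public void Create() { }
//     [MyLog] public void Cancel() { }
// }
```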

Some attributes that enhance your code work on types rather than methods or members. For example, the community add-in ToString applies on types and synthesizes a ToString method for them.

You can use multicasting here as well. If you apply the ToString attribute to your assembly, it will instead apply to all classes in that assembly.

diagram
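In code, assembly-level multicasting of the add-in could look like this sketch (the namespace is assumed from the PostSharp.Community.ToString package):

```csharp
using PostSharp.Community.ToString;

// Applied at the assembly level, the attribute is multicast to every
// class in the assembly that does not already override ToString.
[assembly: ToString]
```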

Note how properties of the add-in are copied to the actual target classes as well.

More precise targeting

We’ve seen that with multicasting, attributes “cascade down” from assemblies to types to members.

diagram

(Whether an attribute is type-level or method-level is determined by MulticastAttributeUsage. See the documentation for details.)

But it is possible to choose your targets more precisely. Each multicast attribute has a set of properties whose names begin with Attribute, such as AttributeTargetTypes, which affect multicasting. (Almost all add-ins and all PostSharp aspects are multicast attributes.)

Here are the MulticastAttribute properties that I find most useful:

  • AttributeTargetTypes. The attribute is only multicast to types matching this wildcard expression or regex.
  • AttributeTargetMembers. The attribute is only multicast to methods matching this wildcard expression or regex.
  • AttributeTargetTypeAttributes. Only multicast to types that have these characteristics (visibility, abstraction, and so on, expressed as MulticastAttributes flags).
  • AttributeTargetMemberAttributes. Only multicast to methods that have these characteristics (public, static, virtual, and so on).

I’ll show this on a couple of examples:

[assembly: MyLog(AttributeTargetTypes = "*ViewModel", AttributeTargetMemberAttributes = MulticastAttributes.Public)]

This means “Apply MyLog to all public methods of types whose names end in ViewModel.”

[assembly: MyLog(AttributeTargetMembers = "regex:button[0-9]+_Click")]
public class Form1 : Form
{
    ...
}

This means “Apply MyLog to all OnClick handlers on Form1 for buttons with default names.”

Even more precise targeting with exclusion

You can also use AttributeExclude in combination with AttributePriority to exclude some targets that you’ve previously included in multicasting. All attributes are processed in the order of AttributePriority and what comes out at the end of this processing becomes the set of targets to which the attribute is applied.

Here’s an example where the first line applies the aspect to all public methods, and the second line excludes getters and setters from the set:

code with exclusions
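The code image is not available here, but based on the description above, the pair of attributes could look like this sketch (reusing the MyLog aspect from earlier):

```csharp
// Priority 1: apply MyLog to all public methods in the assembly...
[assembly: MyLog(
    AttributePriority = 1,
    AttributeTargetMemberAttributes = MulticastAttributes.Public)]

// Priority 2: ...then exclude property getters and setters from that set.
[assembly: MyLog(
    AttributePriority = 2,
    AttributeExclude = true,
    AttributeTargetMembers = "regex:get_.*|set_.*")]
```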

For more description and details, see our documentation. Make sure not to accidentally use AspectPriority, which is a different property from AttributePriority and doesn’t affect multicasting.

Where to use multicasting

In general, multicasting is useful and safe when applying the aspect to an unintended target does not have grave impact on functionality or performance.

I find multicasting the most helpful in the following cases:

  • Logging and profiling. In PostSharp Community, you can use PostSharp Logging for free in Developer Mode, and you can also write your own logging aspect (sample) or profiling aspect (sample). You may want to apply logging or profiling to large portions of your codebase, or — especially during debugging — to all methods in selected classes.
  • ToString. The community add-in ToString is useful in many classes. You can multicast ToString to absolutely every class (it won’t be applied to classes that override ToString so it’s safe).
  • Security. If you multicast an authorization aspect to all fields or methods so that some permissions are required, you are guaranteed to never forget to request a permission. With security, an opt-out approach is often safer than opt-in because applying it to an unintended field is preferable to not applying the aspect by mistake.

In some cases, not even multicasting will be precise enough for you. In those cases, you can use aspect providers or compile time validation. It is also possible to multicast aspects to subclasses using multicast inheritance.
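As a sketch of the aspect-provider approach mentioned above (the interface and method names follow PostSharp's IAspectProvider; the selection logic is a made-up example):

```csharp
using System;
using System.Collections.Generic;
using PostSharp.Aspects;
using PostSharp.Extensibility;

[Serializable]
[MulticastAttributeUsage(MulticastTargets.Class)]
public class LogCommandHandlersAttribute : TypeLevelAspect, IAspectProvider
{
    // Called at build time; yields one aspect instance per selected method.
    public IEnumerable<AspectInstance> ProvideAspects(object targetElement)
    {
        var type = (Type) targetElement;

        foreach (var method in type.GetMethods())
        {
            if (method.Name.EndsWith("Handler")) // arbitrary selection rule
            {
                yield return new AspectInstance(method, new MyLog());
            }
        }
    }
}
```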

Conclusion

We often say that one of the benefits of PostSharp is that it saves you from writing extra repetitive lines of code.

Multicasting is one of the ways that allows you to do that, by factoring out common code and applying it at the class or assembly level.

For more examples and help with multicasting, see our short summary on GitHub or our product documentation.

Action required: updating to Visual Studio 16.8 may break your build with PostSharp 6.5-6.7


We want to notify you that PostSharp may fail your builds after updating Visual Studio to version 16.8. This happens because .NET 5.0 SDK is installed together with this new version and takes precedence over previous versions of .NET Core SDK. The solution is to pin your repo to .NET Core 3.1 SDK by editing your global.json file.

We apologize for the inconvenience. We regret that we failed to identify this issue before the new release of Visual Studio.

Who is affected?

This problem affects users of PostSharp 6.5, 6.6 or 6.7 updating Visual Studio to version 16.8 and then building a .NET Core or .NET Standard project.

What will happen?

You will get a warning about an unsupported .NET SDK version, followed by a build failure.

Why does it happen?

Because the Visual Studio Installer installs the .NET 5.0 SDK, which becomes the default SDK for all projects or repos that are not pinned to a specific SDK version. Versions of PostSharp prior to 6.8 do not support the .NET 5.0 SDK; they attempt to build anyway, but fail.

What can you do?

To resolve this, you have the following options:

1. Pin your repository to .NET Core 3.1 SDK

Install the latest version of the .NET Core 3.1 SDK and override the SDK version in the global.json file in the root of your repository:

{
  "sdk": {
    "version": "3.1.404",
    "rollForward": "latestPatch"
  }
}

You should also consider staying on the Visual Studio 16.7 servicing baseline unless you upgrade PostSharp to version 6.8 (see below).

2. Update to PostSharp 6.8

Upgrade to PostSharp 6.8, which supports VS 16.8 and .NET 5 SDK and is currently in preview. We are expecting to publish an RC next week. Note that updating PostSharp requires an up-to-date support subscription.

Noisy logs? Improve your signal-to-noise ratio with per-request logging and sampling


PostSharp Logging makes it so simple to add logging to your application that you can easily end up capturing gigabytes of data every minute, imposing a big overhead on run-time performance, network bandwidth, and storage. But let’s face it: most of this data won’t ever be useful. Starting from v6.8, PostSharp Logging allows you to define precisely which requests should be logged in high detail, and which not.

For instance, if you are running a web application, it is probably useless to log every single request with the highest level of detail, especially for types of requests that are served 100 times per second. Therefore, it is important to be able to decide, at run time, which requests need to be logged. You may choose to disable logging by default and enable it only for selected requests.

We call that per-request or, more generally, per-transaction logging.

It has been possible to do per-request logging with PostSharp for a long time, but with PostSharp 6.8, it becomes really easy.

In this article, I will assume I have some ASP.NET Core (or ASP.NET 5) application and I want to add logging to it as an afterthought.

You can download the source code of this example on GitHub.

Step 1. Add PostSharp Logging to your app

  1. Add the PostSharp.Patterns.Diagnostics package to the projects that you want to log.

  2. Add the following custom attributes to your code. This will add logging to everything (really, every single method) except property getters and setters. You will likely need to tune this code to improve the signal-to-noise ratio.

    using PostSharp.Patterns.Diagnostics;

    // Add logging to everything
    [assembly: Log(AttributePriority = 0)]

    // Remove logging from property getters and setters
    [assembly: Log(AttributePriority = 1, AttributeExclude = true, AttributeTargetMembers = "regex:get_.*|set_.*")]
  3. In your Program.Main, initialize PostSharp Logging. In this example we will direct the output to the system console, but you can use virtually any logging framework like log4net, Serilog or Microsoft’s abstractions. See the documentation for details.

    namespace PostSharp.Samples.Logging.PerRequest
    {
        public class Program
        {
            public static void Main(string[] args)
            {
                // Initializes PostSharp Logging. You can plug your own framework here.
                LoggingServices.DefaultBackend = new ConsoleLoggingBackend();

                CreateHostBuilder(args).Build().Run();
            }

            public static IHostBuilder CreateHostBuilder(string[] args) =>
                Host.CreateDefaultBuilder(args)
                    .ConfigureWebHostDefaults(webBuilder => { webBuilder.UseStartup<Startup>(); });
        }
    }
  4. Start your web app from the command line using dotnet run, do a few requests from your browser, and you will see log records appearing:

       Debug     | Trace  | Program.CreateHostBuilder([]) | Starting.
       Debug     | Trace  | Program.CreateHostBuilder([]) | Succeeded: returnValue = {Microsoft.Extensions.Hosting.HostBuilder}.
       Debug     | Trace  | Startup..ctor({Microsoft.Extensions.Configuration.ConfigurationRoot}) | Starting.
       Debug     | Trace  | Startup..ctor({Microsoft.Extensions.Configuration.ConfigurationRoot}) | Succeeded.
       Debug     | Trace  | Startup.ConfigureServices([ {ServiceType: Microsoft.Extensions.Hosting.IHostingEnvironment Lifetime: Singleton ImplementationInstance: Microsoft.Extensions.Hosting.Internal.HostingEnvironment}, {ServiceType: Microsoft.Extensions.Hosting.IHostEnvironment Lifetime: Singleton ImplementationInstance: Microsoft.Extensions.Hosting.Internal.HostingEnvironment}, {ServiceType: Microsoft.Extensions.Hosting.HostBuilderContext Lifetime: Singleton ImplementationInstance: Microsoft.Extensions.Hosting.HostBuilderContext}, {ServiceType: Microsoft.Extensions.Configuration.IConfiguration Lifetime: Singleton ImplementationFactory: Microsoft.Extensions.Configuration.IConfiguration <CreateServiceProvider>b__26_0(System.IServiceProvider)}, {ServiceType: Microsoft.Extensions.Hosting.IApplicationLifetime Lifetime: Singleton ImplementationFactory: Microsoft.Extensions.Hosting.IApplicationLifetime <CreateServiceProvider>b__26_1(System.IServiceProvider)}, and 64 more ]) | Starting.
       Debug     | Trace  | Startup.ConfigureServices([ {ServiceType: Microsoft.Extensions.Hosting.IHostingEnvironment Lifetime: Singleton ImplementationInstance: Microsoft.Extensions.Hosting.Internal.HostingEnvironment}, {ServiceType: Microsoft.Extensions.Hosting.IHostEnvironment Lifetime: Singleton ImplementationInstance: Microsoft.Extensions.Hosting.Internal.HostingEnvironment}, {ServiceType: Microsoft.Extensions.Hosting.HostBuilderContext Lifetime: Singleton ImplementationInstance: Microsoft.Extensions.Hosting.HostBuilderContext}, {ServiceType: Microsoft.Extensions.Configuration.IConfiguration Lifetime: Singleton ImplementationFactory: Microsoft.Extensions.Configuration.IConfiguration <CreateServiceProvider>b__26_0(System.IServiceProvider)}, {ServiceType: Microsoft.Extensions.Hosting.IApplicationLifetime Lifetime: Singleton ImplementationFactory: Microsoft.Extensions.Hosting.IApplicationLifetime <CreateServiceProvider>b__26_1(System.IServiceProvider)}, and 64 more ]) | Succeeded.
       Debug     | Trace  | Startup.Configure({Microsoft.AspNetCore.Builder.ApplicationBuilder}, {Microsoft.AspNetCore.Hosting.HostingEnvironment}) | Starting.
       Debug     | Trace  | Startup.Configure({Microsoft.AspNetCore.Builder.ApplicationBuilder}, {Microsoft.AspNetCore.Hosting.HostingEnvironment}) | Succeeded.
       info: Microsoft.Hosting.Lifetime[0]
             Now listening on: https://localhost:5001
       info: Microsoft.Hosting.Lifetime[0]
             Now listening on: http://localhost:5000
       info: Microsoft.Hosting.Lifetime[0]
             Application started. Press Ctrl+C to shut down.
       info: Microsoft.Hosting.Lifetime[0]
             Hosting environment: Development
       info: Microsoft.Hosting.Lifetime[0]
             Content root path: C:\src\PostSharp.Samples\Diagnostics\PostSharp.Samples.Logging.PerRequest
       Debug     | Custom | AspNetCoreLogging | GET / | Starting.
       Debug     | Trace  | IndexModel..ctor({Microsoft.Extensions.Logging.Logger`1[PostSharp.Samples.Logging.PerRequest.Pages.IndexModel]}) | Starting.
       Debug     | Trace  | IndexModel..ctor({Microsoft.Extensions.Logging.Logger`1[PostSharp.Samples.Logging.PerRequest.Pages.IndexModel]}) | Succeeded.
       Debug     | Trace  | IndexModel.OnGet() | Starting.
       Debug     | Trace  | IndexModel.OnGet() | Succeeded.
       Debug     | Trace  | CommonCode.HelloWorld() | Starting.
       Debug     | Custom |   CommonCode | It seems logging is enabled.
       Debug     | Trace  | CommonCode.HelloWorld() | Succeeded.
       Debug     | Custom | AspNetCoreLogging | GET / | Success: StatusCode = 200.

So far so good, but you will soon discover that the amount of information is overwhelming.

Step 2. Configure verbosity using a configuration file

To avoid being submerged in an ocean of logs, we now want to limit logging to interesting requests only. Imagine this scenario:

  • By default, only warnings and errors will be logged.
  • For the Privacy page, we know we have more problems and they are difficult to reproduce (please excuse this unlikely scenario but this is just an illustration), so we want to log with high verbosity all requests from this page.
  • We also have some problems on the Home page, but the traffic is too heavy to log every single request, so we just want to log one request every 10 seconds.

Let’s implement these rules in our app.

  1. Create a file postsharp-logging.config with the following content:
<?xml version="1.0" encoding="utf-8"?>
<logging>
  <verbosity level='warning'/>
  <transactions>
    <policy type='AspNetCoreRequest' if='t.Request.Path == "/Privacy"'>
      <verbosity>
        <source level='debug'/>
      </verbosity>
    </policy>
    <policy type='AspNetCoreRequest' if='t.Request.Path == "/"' sample='OnceEveryXSeconds(10, t.Request.Path)'>
      <verbosity>
        <source level='debug'/>
      </verbosity>
    </policy>
  </transactions>
</logging>
  2. Add the packages PostSharp.Patterns.Diagnostics.AspNetCore and PostSharp.Patterns.Diagnostics.Configuration to your top-level project (it is not necessary to add them to all projects in your solution).

  3. In your Program.Main, add a call to AspNetCoreLogging.Initialize(). This initializes the interception of incoming web requests in ASP.NET Core.

  4. Also in your Program.Main, configure the logging back-end with the ConfigureFromXml method.

This will give you the following code:

public static void Main(string[] args)
{
    AspNetCoreLogging.Initialize();

    LoggingServices.DefaultBackend = new ConsoleLoggingBackend();
    LoggingServices.DefaultBackend.ConfigureFromXml(XDocument.Load("postsharp-logging.config"));

    CreateHostBuilder(args).Build().Run();
}

Now if you run your application in a console and test it from a browser, you will see that the logging is no longer so verbose; it follows the rules we defined.

Step 3. Modify the verbosity on the fly

Now suppose your application has been deployed to production for a while and you’re starting to notice frequent errors for some specific query string. You want to gather more detailed logs about these requests, but without redeploying your app.

To get prepared for this scenario, you need to store your logging configuration not within your application, but in a cloud storage service (or any HTTP server).

In this example we will use Google Drive. With the Share option of Google Drive, create a public link to this file, i.e. a link that everyone can see (but not edit!). For instance:

https://drive.google.com/file/d/1L50ddULX1ZXHscNN0CjYvjrwHRqd8Lqm/view?usp=sharing

In this link, 1L50ddULX1ZXHscNN0CjYvjrwHRqd8Lqm is the file identifier. Append it to https://drive.google.com/uc?export=download&id=, for instance:

 https://drive.google.com/uc?export=download&id=1L50ddULX1ZXHscNN0CjYvjrwHRqd8Lqm

Then use this link to configure PostSharp Logging and specify a reload period:

LoggingServices.DefaultBackend.ConfigureFromXmlWithAutoReloadAsync(
    new Uri("https://drive.google.com/uc?export=download&id=1L50ddULX1ZXHscNN0CjYvjrwHRqd8Lqm"),
    TimeSpan.FromSeconds(60));

You can now see in your log that PostSharp Logging periodically fetches the configuration:

LoggingConfigurationManager | Configuring {PostSharp.Patterns.Diagnostics.Backends.Console.ConsoleLoggingBackend} from {https://drive.google.com/uc?export=download&id=1L50ddULX1ZXHscNN0CjYvjrwHRqd8Lqm} | Starting.
LoggingConfigurationManager | Configuring {PostSharp.Patterns.Diagnostics.Backends.Console.ConsoleLoggingBackend} from {https://drive.google.com/uc?export=download&id=1L50ddULX1ZXHscNN0CjYvjrwHRqd8Lqm} | No change.

You can edit the configuration file online with Google Drive text editor, and the changes will be taken into account the next time the configuration is fetched.

What can possibly go wrong?

When you modify the configuration, you will need to monitor your log for a few minutes to make sure you didn’t make a mistake. If the configuration file is incorrect, your application will continue to work as usual, but the verbosity configuration will be partially or totally ignored.

Look in your logs for exceptions like this:

TransactionPolicy | Cannot compile the expression OnceEveryXSeconds(10, t.Request.Path)x. The policy will be disabled. Exception = {DynamicExpresso.Exceptions.ParseException}.
> DynamicExpresso.Exceptions.ParseException: Syntax error (at index 37).
>    at DynamicExpresso.Parsing.Parser.ValidateToken(TokenId t, String errorMessage)
>    at DynamicExpresso.Parsing.Parser.Parse()
>    at DynamicExpresso.Interpreter.ParseAsLambda(String expressionText, Type expressionType, Parameter[] parameters)
>    at DynamicExpresso.Interpreter.Parse(String expressionText, Type expressionType, Parameter[] parameters)
>    at DynamicExpresso.Interpreter.Parse(String expressionText, Parameter[] parameters)
>    at PostSharp.Patterns.Diagnostics.Transactions.ExpressionCompiler.CompilePredicate[T](String expression, Boolean defaultValue, Boolean isSamplingExpression)

The second thing you need to pay attention to is the security of the configuration file. Don’t let unauthorized people edit it: ultimately, this file allows them to execute code within your application, and you cannot rule out that DynamicExpresso has a security gap.

Custom transactions

If your application also processes transactions (such as messages from a queue or message bus, or files from a directory), but is not based on ASP.NET Core, you can still use dynamic configuration but it takes a little more effort to prepare your code for it. See the documentation for details.

Summary

PostSharp Logging makes it very easy to create highly detailed logs, but quite often too much is too much. More often than not, you need basic logging for 99.9% of your requests and super-detailed logging for 0.1%. And when your app runs in production, you don’t want to redeploy it just to change the level of logging.

With PostSharp Logging 6.8, it is now a question of minutes to implement this scenario. You can store your logging configuration in an online drive and configure your application to reload it periodically.

Happy PostSharping!

Distributed logging with Serilog, Elastic Search, and PostSharp


When you have a distributed application, for instance a set of microservices, it may be challenging to understand the execution logs unless you have the right logging settings and infrastructure in place. This article shows how to properly configure PostSharp Logging 6.8, Serilog, and Elasticsearch for this scenario.

Logging in a distributed application used to be challenging. In the recent past, every application produced its own log files, and tracking a request across several log files was a painful task.

More and more vendors allow you to store logs from several applications in the cloud, or in a central database, which makes them easier to retrieve and analyze. Elasticsearch is an open-source document database that is often used to store distributed logs. Elasticsearch has an adapter for Serilog and is easy to use from .NET.

Even with Elasticsearch, the developer is still responsible for properly setting the correlation IDs and the cross-context properties. And, of course, as a developer you also have to add logging to your entire application – which takes a lot of effort and results in annoying boilerplate.

The mission of PostSharp Logging has always been to add logging to your apps automatically, without affecting your source code. Starting with version 6.8, it also integrates much better with scenarios of distributed logging.

The rest of this article is based on a sample available on GitHub. This sample is composed of a command-line client and an ASP.NET Core Web API. In the rest of this article, for simplicity, we will assume that every role of your application is both a client and a server.

Step 1. Add logging to all projects (including class libraries)

  1. Add the PostSharp.Patterns.Diagnostics package to all projects that you want to log.

  2. Add the following custom attributes to your code. This will add logging to everything (really, every single method) except property getters and setters. You will likely need to tune this code to improve the signal-to-noise ratio.

    using PostSharp.Patterns.Diagnostics;

    // Add logging to everything
    [assembly: Log(AttributePriority = 0)]

    // Remove logging from property getters and setters
    [assembly: Log(AttributePriority = 1, AttributeExclude = true, AttributeTargetMembers = "regex:get_.*|set_.*")]

Step 2. Configure web projects

  1. Add the following packages:

    • PostSharp.Patterns.Diagnostics.HttpClient
    • PostSharp.Patterns.Diagnostics.AspNetCore
    • PostSharp.Patterns.Diagnostics.Serilog
    • Serilog.Extensions.Logging
    • Serilog.AspNetCore
    • Serilog.Sinks.Console
    • Serilog.Sinks.ElasticSearch
  2. Initialize Serilog in your Program.Main. We set up two sinks: the console, and Elasticsearch.

    using var logger = new LoggerConfiguration()
        .Enrich.WithProperty("Application", typeof(Program).Assembly.GetName().Name)
        .MinimumLevel.Debug()
        .WriteTo.Elasticsearch(
            new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
            {
                BatchPostingLimit = 1, // For demo.
                AutoRegisterTemplate = true,
                AutoRegisterTemplateVersion = AutoRegisterTemplateVersion.ESv6,
                EmitEventFailure = EmitEventFailureHandling.ThrowException | EmitEventFailureHandling.WriteToSelfLog,
                FailureCallback = e => Console.WriteLine("Unable to submit event " + e.MessageTemplate),
            })
        .WriteTo.Console(outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss} [{Level:u3}] {Indent:l}{Message:l}{NewLine}{Exception}")
        .CreateLogger();
  3. Configure PostSharp Logging

    var backend = new SerilogLoggingBackend(logger);
    backend.Options.IncludeActivityExecutionTime = true;
    backend.Options.IncludeExceptionDetails = true;
    backend.Options.SemanticParametersTreatedSemantically = SemanticParameterKind.All;
    backend.Options.IncludedSpecialProperties = SerilogSpecialProperties.All;
    backend.Options.ContextIdGenerationStrategy = ContextIdGenerationStrategy.Hierarchical;
    LoggingServices.DefaultBackend = backend;
  4. Set up PostSharp Logging to capture outgoing and incoming HTTP requests. To enable correlation, we have to pass an implementation of the ICorrelationProtocol. The only one that’s available out of the box is LegacyHttpCorrelationProtocol, which is called legacy because this is how it is called in .NET (the specification of the replacement is not yet final):

    AspNetCoreLogging.Initialize(correlationProtocol: new LegacyHttpCorrelationProtocol());

    HttpClientLogging.Initialize(
        correlationProtocol: new LegacyHttpCorrelationProtocol(),
        requestUriPredicate: uri => uri.Port != 9200);

Your web projects startup code should look now like this:

public static void Main(string[] args)
{
    // Configure Serilog to write to the console and to Elasticsearch.
    using var logger = new LoggerConfiguration()
        .Enrich.WithProperty("Application", typeof(Program).Assembly.GetName().Name)
        .MinimumLevel.Debug()
        .WriteTo.Elasticsearch(
            new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
            {
                BatchPostingLimit = 1, // For demo.
                AutoRegisterTemplate = true,
                AutoRegisterTemplateVersion = AutoRegisterTemplateVersion.ESv6,
                EmitEventFailure = EmitEventFailureHandling.ThrowException | EmitEventFailureHandling.WriteToSelfLog,
                FailureCallback = e => Console.WriteLine("Unable to submit event " + e.MessageTemplate),
            })
        .WriteTo.Console(outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss} [{Level:u3}] {Indent:l}{Message:l}{NewLine}{Exception}")
        .CreateLogger();

    // Configure PostSharp Logging to write to Serilog.
    var backend = new SerilogLoggingBackend(logger);
    backend.Options.IncludeActivityExecutionTime = true;
    backend.Options.IncludeExceptionDetails = true;
    backend.Options.SemanticParametersTreatedSemantically = SemanticParameterKind.All;
    backend.Options.IncludedSpecialProperties = SerilogSpecialProperties.All;
    backend.Options.ContextIdGenerationStrategy = ContextIdGenerationStrategy.Hierarchical;
    LoggingServices.DefaultBackend = backend;

    // Instrument incoming HTTP requests.
    AspNetCoreLogging.Initialize();

    // Instrument outgoing HTTP requests but not those to Elasticsearch.
    HttpClientLogging.Initialize(uri => uri.Port != 9200);

    // Execute the web app.
    CreateWebHostBuilder(args).Build().Run();
}

Step 3. Start the Elastic stack

Before you run the projects, you need to start the Elastic stack. It is composed of three services: Elasticsearch is the document database, Logstash is the ingestion service, and Kibana is the dashboard (together, they are also called the ELK stack).

If you don’t have a working ELK stack already, you can configure one quickly thanks to Docker:

  1. Clone the PostSharp.Samples repo from GitHub.
  2. Open a command prompt with elevated privileges.
  3. Go to the PostSharp.Samples/Diagnostics/PostSharp.Samples.Logging.ElasticStack/elastic-stack/ directory.
  4. Execute this command:
docker-compose up

See README.md for details.

Step 4. Start your distributed application

We’re now ready. Start all components of your applications.

In our example, this is done using dotnet run.

Then execute a few requests to fill the logging server.

Step 5. Visualize the results in Kibana

  1. Open Kibana using your web browser at http://localhost:5601/.

  2. If it’s the first time you’re opening this Kibana instance, you will need to define an index. Use the index pattern logstash-* and the Time Filter field @timestamp.

  3. Go to Discover. You can see a lot of records.

    Here is the detail of one of those records:

       {
       "_index": "logstash-2020.11.23",
       "_type": "logevent",
       "_id": "QfLJ9XUBRLWFASabvGNb",
       "_version": 1,
       "_score": null,
       "_source": {
       "@timestamp": "2020-11-23T16:47:20.0195192+01:00",
       "level": "Debug",
       "messageTemplate": "{TypeName:l}.{MemberName:l}({Arg0:l}) | {RecordStatus:l}.",
       "message": "QueueProcessor.ProcessQueue(\".\\My\\Queue\") | Succeeded.",
       "fields": {
             "TypeName": "QueueProcessor",
             "MemberName": "ProcessQueue",
             "Arg0": "\".\\My\\Queue\"",
             "RecordStatus": "Succeeded",
             "#User": "Gaius Julius Caesar",
             "Indent": "  ",
             "IndentLevel": 1,
             "EventId": "|4e361fe67f.a2.a3.b36.",
             "SourceContext": "ClientExample.QueueProcessor",
             "Application": "PostSharp.Samples.Logging.Distributed.Client"
       },
       "renderings": {
             "TypeName": [ { "Format": "l", "Rendering": "QueueProcessor" } ],
             "MemberName": [ { "Format": "l", "Rendering": "ProcessQueue" } ],
             "Arg0": [ { "Format": "l", "Rendering": "\".\\My\\Queue\"" } ],
             "RecordStatus": [ { "Format": "l", "Rendering": "Succeeded" } ]  }
       },
       "fields": {  "@timestamp": [  "2020-11-23T15:47:20.019Z" ]  },
       "sort": [ 1606146440019 ]
       }
  4. Add a few interesting columns to the table:

    • fields.Application: the name of the originating application
    • level: the severity of the message
    • message: the human-readable message
    • fields.EventId: a hierarchical identifier

Step 6. Isolate a specific request

So far we’ve been able to gather a long list of log requests, but what if we want a consistent view of a single request?

This is simple thanks to the fields.EventId property. This identifier is synthetic, i.e. made of several parts, and cross-process. The identifier of a child activity or scope always starts with the identifier of its logical parent, even if it resides in a different process. To filter all log records of a single request, we need to find the identifier of the root node we’re interested in, then look for all log records whose identifier starts with that parent identifier.
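As an illustration (with made-up identifiers), the parent-child relationship is visible in the identifiers themselves:

```
|4e361fe67f.a2.a3.b33.         root activity of the request (process A)
|4e361fe67f.a2.a3.b33.c1.      child activity within the same request (process B)
|4e361fe67f.a2.a3.b33.c1.d2.   grandchild activity, possibly in yet another process
```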

In Kibana, you can type an expression like this in the search box (you will have to adjust the identifier):

fields.EventId: "|4e361fe67f.a2.a3.b33*"

This now gives us a consistent view of the request processing:

Step 7. Add a cross-process logging property (aka baggage)

It’s often useful to include in each log the name of the user who initiated the request. However, the user identity may not flow through the whole application. With PostSharp Logging, you can mark a property as being baggage, which means that you want it to be transferred across processes. When you define a property as baggage, HttpClientLogging will add it to the Correlation-Context HTTP header, and AspNetCoreLogging will read this header and interpret it properly.

Here is how to define a baggage for an execution context:

  1. Define a class with all the needed properties. Exclude this class from logging and mark the cross-process properties with [LoggingPropertyOptions(IsBaggage = true)].

    [Log(AttributeExclude = true)]
    class Baggage
    {
        [LoggingPropertyOptions(IsBaggage = true)]
        public string User { get; set; }
    }
  2. Wrap the activity with a call to OpenActivity and pass this baggage:

    public class MyClass
    {
        private static readonly LogSource logSource = LogSource.Get();

        private async Task ProcessRequest()
        {
            // ...
            using (logSource.Debug.OpenActivity(
                Formatted("Processing the request."),
                new OpenActivityOptions(new Baggage { User = "Gaius Julius Caesar" })))
            {
                await QueueProcessor.ProcessQueue(".\\My\\Queue");
            }
        }
    }

You can consider building this code into an ActionFilter or a PageFilter.
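A sketch of such an action filter, assuming the Baggage class from the previous step (the filter name and the way the user name is obtained are illustrative):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Filters;
using PostSharp.Patterns.Diagnostics;
using static PostSharp.Patterns.Diagnostics.FormattedMessageBuilder;

[Log(AttributeExclude = true)]
public class BaggageActionFilter : IAsyncActionFilter
{
    private static readonly LogSource logSource = LogSource.Get();

    public async Task OnActionExecutionAsync(
        ActionExecutingContext context, ActionExecutionDelegate next)
    {
        var baggage = new Baggage
        {
            User = context.HttpContext.User.Identity?.Name ?? "anonymous"
        };

        // Every action runs inside an activity carrying the baggage,
        // so the User property flows to downstream services.
        using (logSource.Debug.OpenActivity(
            Formatted("Executing action."),
            new OpenActivityOptions(baggage)))
        {
            await next();
        }
    }
}
```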

If you now run your distributed application, you can add fields.#User as a new column and see that the property is being preserved across processes:

Step 8. Configure per-request logging

You now probably have way too much logging in your application. Even though Elasticsearch is open source, operating a node in production is not cheap because of the resources it consumes, so you will need to keep your database within a manageable size: probably a few gigabytes. Therefore, it’s desirable to log only the requests that are important to you.

PostSharp Logging can be configured to log every request with a different level of verbosity – for instance just warnings by default, but everything for the /invoice API when it comes from the IP 12.64.347.3. Details in this blog post.

Summary

Producing a highly detailed log of a distributed .NET application has become much simpler with PostSharp 6.8. By adding two packages to your project – one for incoming HTTP requests and one for outgoing requests – and calling their initialization methods, PostSharp Logging will start producing cross-process event identifiers that are easy to filter. It also supports baggage, i.e. cross-process logging properties.

Happy PostSharping!

Announcing PostSharp 6.8 RC: Support for .NET 5, C# 9 and improvements in logging


We are happy to announce the release of PostSharp 6.8 RC. Included in this release are support for .NET 5 and C# 9 as well as significant improvements in logging. This version is available for download on our website.

Support for .NET 5 and C# 9

We now fully support .NET 5. Additionally, we have tested PostSharp with C# 9 and made a few corrections to support new features like function pointers.

Logging

PostSharp 6.8 includes several improvements in logging:

Per-request logging

PostSharp Logging makes it very easy to create highly-detailed logs, but quite often too much is too much. Often, you need basic logging for 99.9% of your requests and super-detailed logging for 0.1%. And when your app runs in production, you don't want to redeploy it just to change the level of logging. This is now possible thanks to a file like this:

<logging>
  <verbosity level='warning'/>
  <transactions>
    <policy type='AspNetCoreRequest' if='t.Request.Path == "/"' sample='OnceEveryXSeconds(10, t.Request.Path)'>
      <verbosity>
        <source level='debug'/>
      </verbosity>
    </policy>
  </transactions>
</logging>

You can store this file in a cloud drive and configure your application to reload it periodically. Read more in this blog post.

Distributed logging

Producing a highly detailed log of a distributed .NET application has become much simpler with PostSharp 6.8. With a distributed application, it may be challenging to understand the execution logs unless you have the right logging settings and infrastructure in place. You can read here in detail how you can now properly configure PostSharp Logging, Serilog and Elasticsearch for this scenario.

Usage measurement for per-usage licensing

In version 6.6, we introduced per-usage licensing, a pricing model where you are charged not per daily unique active user but per amount of source code in which you use PostSharp. If you would like to use this licensing instead of the traditional per-developer licensing, it is now possible to know exactly how many lines of code you would be consuming with a per-usage subscription, even if you don't have one yet. For details, see Per-Usage Licenses.

Summary

In PostSharp 6.8 we implemented support for C# 9 and .NET 5, and you can now use PostSharp safely with these new technologies. In addition, 6.8 includes several important improvements in logging such as per-request logging and distributed logging.

We recommend upgrading to 6.8 now, as in our latest announcement we warned about the possibility of PostSharp 6.5–6.7 failing your build after updating Visual Studio to version 16.8. Read more about the issue and fixes here.

As always, it is a good time to update your VS extension and NuGet packages and report any problem via our support forum.

Happy PostSharping!
