At Win Infoway, we are a friendly, polyglot group of developers. Different parts of Win Infoway are written in different languages and frameworks – whatever works best for the task.

Given the large amount of C# code we have and the explosive growth of the data we process, optimization work has been necessary at various points. Most of the big wins came from genuinely rethinking a problem and approaching it from a whole new angle.

Today, however, I wanted to share some C# performance tips that have helped me in my recent work. Some of them are pretty micro, so don’t just go and apply everything here blindly. With that, tip 1 is …

 

1. The higher the level, the slower the speed (usually)

It’s just a pattern I’ve picked up on: the higher the level of abstraction you use, the slower the code will often be. A common example I run into is using LINQ in a busy piece of code (maybe in a loop called millions of times). LINQ is great for quickly expressing something that might otherwise take a bunch of lines of code, but you often leave performance on the table.

Make no mistake – LINQ is great for getting a working app out the door. But in the performance-critical parts of your code base, it can give away too much, especially since it is so easy to chain so many operations together.

The specific example I had was a .SelectMany().Distinct().Count() chain. Since it was called tens of millions of times (a critical hot spot found by my profiler), it added up to a huge share of the total running time. I took a different approach and reduced the execution time by several orders of magnitude.
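I can’t reproduce the original code here, but just to illustrate the general pattern, here’s a rough sketch of a LINQ chain like the one above next to one possible hand-rolled alternative. The orders, order.Items and Item names are made up for the example, and the usual System.Linq / System.Collections.Generic usings are assumed:

// The LINQ version – concise, but each call adds enumerator and delegate overhead:
int count = orders.SelectMany(o => o.Items).Distinct().Count();

// One possible hand-rolled alternative: collect unique items in a HashSet directly.
var seen = new HashSet<Item>();
foreach (var order in orders)
{
    foreach (var item in order.Items)
    {
        seen.Add(item);
    }
}
int distinctCount = seen.Count;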

 

2. Don’t underestimate release builds vs. debug builds

I was hacking away and was quite happy with the performance I was getting. Then I realized that I was running all my tests inside Visual Studio (I often write my performance tests as unit tests, so I can more easily run only the part that interests me). We all know that compiler and JIT optimizations are only enabled for release builds.

So I made a release build and called the methods I was testing from a console application.

That paid off nicely. I had already hand-optimized my code heavily, so it was exactly the kind of code where the micro-optimizations of the .NET JIT compiler shine. I gained about 30% more performance with the optimizations enabled! It reminds me of a story I read online a while ago.

This is an old game programming tale from the 90s, when memory limits were very tight. Near the end of the development cycle, the team would run out of memory and start debating what should be cut or downgraded to fit into the small available footprint. The senior developer had expected this, based on his experience, and had allocated 1 MB of memory filled with junk data at the very beginning of the project. He then saved the day by simply freeing that 1 MB he had set aside from the start!

Even though the team had been short on space, that suddenly freed memory gave them the headroom they needed, and they shipped on time.

Why do I share this? The same goes for performance: get something that works fairly well in debug mode, and you’re set to pick up “free” performance from the release build. Nice.
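If you want to be sure of what you’re actually measuring, one possible sanity check (just a sketch, not something from my original test code) is to inspect the entry assembly’s DebuggableAttribute and warn when JIT optimizations are disabled:

using System;
using System.Diagnostics;
using System.Reflection;

// Warn if we're timing a build with JIT optimizations turned off.
var debuggable = Assembly.GetEntryAssembly()?
    .GetCustomAttribute<DebuggableAttribute>();

if (debuggable != null && debuggable.IsJITOptimizerDisabled)
{
    Console.WriteLine("Warning: JIT optimizations are disabled – " +
                      "these timings won't reflect a release build.");
}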

 

3. Look at the bigger picture

There are fantastic algorithms out there. You don’t need most of them day to day, or even month to month, but it’s worth knowing they exist. Too often I find a much better approach to a problem only after doing some research. A developer who does research before coding is about as rare as a developer who does proper analysis before writing code. We love code and always want to dive straight into the IDE.

Also, when we look at performance issues, we often focus too narrowly on a single line or method. That can be a mistake – stepping back for an overview can improve performance far more significantly by reducing the amount of work that needs to be done in the first place.

 

4. Relieve the pressure on the garbage collector

C#/.NET features garbage collection. Garbage collection is the process that determines which objects are no longer in use and removes them to free up memory. That means that in C#, unlike in languages like C++, you don’t have to manually dispose of objects that are no longer useful in order to reclaim their memory. Instead, the garbage collector (GC) handles all of that, so you don’t have to.

The problem is that there’s no free lunch. The collection process itself causes a performance penalty, so you don’t really want the GC to collect all the time. So how do you avoid that?

There are many useful techniques to avoid putting too much pressure on the GC. Here, I’ll focus on a single tip: avoid unnecessary allocations. What that means is to avoid things like this:

List<Product> products = new List<Product>();
products = productRepo.All();

The first line creates a list instance that’s completely useless, since the very next line returns another instance and assigns its reference to the variable. Now imagine the two lines above inside a loop that executes thousands of times.

The code above might look like a silly example, but I’ve seen code like this in production—and not just a single time. Don’t focus on the example itself but on the general advice. Don’t create objects unless they’re really needed.
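For illustration, here’s a minimal sketch of the fix, plus what reuse could look like in a hot loop. LoadInto, Process and batchCount are hypothetical names used only for this example, not something from the original code:

// Skip the throwaway allocation – the repository call already gives us the list:
List<Product> products = productRepo.All();

// If similar code runs inside a hot loop, reusing one pre-sized buffer avoids
// creating garbage on every iteration.
var buffer = new List<Product>(capacity: 1024);
for (int batch = 0; batch < batchCount; batch++)
{
    buffer.Clear();                       // keeps the existing backing array
    productRepo.LoadInto(batch, buffer);  // hypothetical: fills the buffer
    Process(buffer);                      // hypothetical consumer
}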

Due to the way the GC works in .NET (it’s a generational GC process), newer objects are more likely to be collected than old ones. That means that the creation of many new, short-lived objects might trigger the GC to run.

 

5. Don’t use empty destructors

The title says it all: don’t add empty destructors to your classes. An entry is added to the finalization queue for every instance of a class that has a destructor, and our old friend the GC then has to process that queue before those objects can be reclaimed. An empty destructor means all of that work is for nothing.

Remember that running the GC is not cheap in terms of performance, as we have already mentioned. Do not make the GC work unnecessarily.
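Here’s a minimal illustration of the anti-pattern (ReportGenerator is just a made-up name):

public class ReportGenerator
{
    // This empty destructor puts every instance on the finalization queue
    // and delays its collection – pure overhead. The fix is to delete it.
    ~ReportGenerator()
    {
    }
}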

 

6. Avoid unnecessary boxing and unboxing

Boxing and unboxing are, like garbage collection, expensive processes, performance-wise. So we want to avoid doing them unnecessarily. But what are they, in practice?

Boxing is like creating a reference type box and putting a value of a value type inside it. In other words, it consists of converting a value type to “object” or to an interface type this value type implements. Unboxing is the opposite—it opens the box and extracts the value type from inside it. Why is that a problem?

Well, as we’ve mentioned, boxing and unboxing are expensive processes in themselves. Besides that, when you box a value you create another object on the heap, which puts additional pressure on—you’ve guessed it!—the GC.

So, how to avoid boxing and unboxing?

In general, you can do that by avoiding the older .NET 1.0-era APIs that predate generics and therefore have to rely on the object type. For instance, prefer generic collections such as System.Collections.Generic.List<T> over something like System.Collections.ArrayList.
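A small sketch of the difference:

// With the old non-generic collection, the int is boxed on the way in
// and unboxed on the way out:
var oldList = new System.Collections.ArrayList();
oldList.Add(42);            // 42 is boxed into an object on the heap
int a = (int)oldList[0];    // and unboxed here

// The generic collection stores the value type directly – no boxing at all:
var newList = new System.Collections.Generic.List<int>();
newList.Add(42);
int b = newList[0];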

 

7. Beware of string concatenation

In C#/.NET, strings are immutable. So any operation that looks like it’s changing a string is actually creating a new one instead. Such operations include methods like Replace and Substring, but also concatenation.

So, the tip here is simple—beware of concatenating a large number of strings, especially inside a loop. In situations like this, use the System.Text.StringBuilder class, instead of using the “+” operator. That will ensure that new instances aren’t created for each part you concatenate.
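For example, compare these two approaches (just a sketch):

// Concatenating with "+" in a loop allocates a brand-new string on every iteration:
string slow = "";
for (int i = 0; i < 10_000; i++)
{
    slow += i + ";";
}

// StringBuilder appends into an internal buffer and materializes the final
// string only once:
var sb = new System.Text.StringBuilder();
for (int i = 0; i < 10_000; i++)
{
    sb.Append(i).Append(';');
}
string fast = sb.ToString();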

 

8. Stay tuned to the evolution of C#

To conclude, we end with very general advice – stay tuned to how the C# language is changing and evolving. The C# team constantly ships new features that can have a positive impact on performance.

A recent example is ref returns and ref locals, introduced in C# 7, which let the developer return by reference and store references in local variables. C# 7.2 introduced the Span<T> type, which allows safe access to contiguous regions of memory.
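A minimal sketch of both features (illustrative names, assumed to run inside a method body on a recent runtime with the usual using System;):

int[] scores = { 10, 20, 30 };

// ref return + ref local (C# 7): Second aliases scores[1] instead of copying it.
ref int Second(int[] numbers) => ref numbers[1];

ref int second = ref Second(scores);
second = 99;                      // writes straight through to the array
Console.WriteLine(scores[1]);     // prints 99

// Span<T> (C# 7.2): a safe view over part of the array, with no copying.
Span<int> tail = scores.AsSpan(1);
tail[0] = 42;                     // also writes to scores[1]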

New features and types like the ones above aren’t likely to be used by the majority of C# developers, but they can certainly have an impact on performance-critical applications and are worth exploring further.
