Refactoring Using Cognitive Complexity
Measuring complexity and refactoring to reduce it.
This post was originally written for the AWH Insights Blog. You can read the original post there.
Refactoring code is an essential part of the software development process. Sometimes, though, it’s difficult to determine what should be refactored and why. One measure for identifying code in need of refactoring is Cognitive Complexity.
Cognitive Complexity is a metric developed by SonarSource, the makers of a code quality tool we use at AWH called SonarQube. SonarQube hooks into a CI pipeline for a project and performs code quality analysis on it. The types of issues that SonarQube can find range from simple things (this variable is never used, remove it) to more complex refactoring (this method is too complex, refactor it). To determine if a method is too complex and should be refactored, a Cognitive Complexity score is calculated.
Defining Complexity
Complexity in programming and computer science has a few meanings. Most programmers are probably familiar with Cyclomatic Complexity, which attempts to tell us how difficult a method will be to test or maintain. It does this by incrementing a score for every branch in a method, like so:
public static string HowMany(int n) // +1 (at least one path through each method)
{
    switch (n)
    {
        case 1: // +1
            return "One";
        case 2: // +1
            return "A couple";
        case 3: // +1
            return "A few";
        case 4: // +1
            return "Several";
        default:
            return "Many";
    }
    // Cyclomatic Complexity = 5
}
While Cyclomatic Complexity does a good job of telling us how complex a method is to test (the score is generally the minimum number of tests you will need to cover a method), it doesn’t give us a good indication of how maintainable or understandable the code is. Cognitive Complexity builds on and adapts Cyclomatic Complexity to produce a score that relates to understanding and maintainability rather than testing.
Note that Cognitive Complexity is not specific to any programming language or style. All the examples in this post use C#.
Calculating Complexity
The first thing you should know about calculating Cognitive Complexity is that you don’t need to do it yourself. SonarQube will analyze your code as part of a CI/CD pipeline and is a great tool for more than just Cognitive Complexity analysis. But learning how the score is calculated can be helpful in structuring your code in more maintainable ways.
This post is only going to give the basics of calculating Cognitive Complexity. For the full details, check out SonarSource’s white paper.
Here are the basic rules for calculating a Cognitive Complexity score:
- Increment the score for every break in the control structure of the code.
- Increment the score again for control structure breaks with each level of nesting.
- Ignore shorthand constructs that aid readability.
Let’s break those down some more.
Breaks in Control Structure
This first rule is carried over from Cyclomatic Complexity: loops and conditionals increment the score for each break in the flow. There are some caveats, though:
- `try` and `finally` blocks are ignored, but a `catch` will increment the score, though only once no matter how many types of exceptions are caught.
- A `switch` will only increment the score by 1, regardless of how many cases are handled. This is different from Cyclomatic Complexity and is done because a switch is generally easier to scan and read than a group of equivalent `if/else if/else` statements.
- Sequences of logical operators are grouped together. Consider the difference between `a && b && c && d` and `a && b || c && d`. The first would increment the score by 1, whereas the second would increment it by 3 since it is much more difficult to read.
- Recursion is considered a kind of loop and increments the score.
- Jumps (`break`, `continue`, etc.) will increment the score, but an early `return` will not.
Because of that rule about `switch` statements, the `HowMany` method above has a Cognitive Complexity score of only 1.
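To make the operator and recursion caveats concrete, here are two small contrived methods (the names and logic are invented purely for illustration), annotated the same way as the other examples in this post:
public static bool CheckRange(int a, int b, int c)
{
    if (a > 0 && b > 0 && c > 0)            // +1 for the if, +1 for the single && sequence
        return true;                        // early return, no increment

    if (a < 0 && b < 0 || b > 10 && c > 10) // +1 for the if, +3 for the three sequences (&&, ||, &&)
        return false;

    return c == 0;
}
// Cognitive Complexity = 6

public static int CountDown(int n)
{
    if (n <= 0)              // +1
        return 0;            // early return, no increment
    return CountDown(n - 1); // +1 (recursion is treated as a kind of loop)
}
// Cognitive Complexity = 2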
Nesting Control Structures
Excessive nesting is a particularly nasty code smell. Cognitive Complexity calls this out by incrementing the score for a control structure again for each level of nesting it sits in. Jumps are not subject to additional increments for nesting, though.
public string DoSomething(string str)
{
    var newStr = string.Empty;
    if (str != null) // +1
    {
        for (var i = 0; i < str.Length; i++) // +1 (+1 for nesting)
        {
            if (str[i] == 'a') // +1 (+2 for nesting)
                newStr += 'A';
            else if (str[i] == 'x') // +1 (+2 for nesting)
                continue; // +1
            else
                newStr += str[i];
        }
    }
    return newStr;
}
// Cognitive Complexity = 10
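Since the nesting penalty is where most of those points came from, flattening the code pays off quickly. Here’s a quick sketch of the same method with the null check inverted into a guard clause, which removes one level of nesting:
public string DoSomething(string str)
{
    if (str == null) // +1
        return string.Empty;

    var newStr = string.Empty;
    for (var i = 0; i < str.Length; i++) // +1
    {
        if (str[i] == 'a') // +1 (+1 for nesting)
            newStr += 'A';
        else if (str[i] == 'x') // +1 (+1 for nesting)
            continue; // +1
        else
            newStr += str[i];
    }
    return newStr;
}
// Cognitive Complexity = 7
Same behavior, but the score drops from 10 to 7 just by removing a pair of braces.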
Ignore Shorthand Constructs
Cognitive Complexity scores are meant to reward good coding practices, so shorthand constructs that aid readability are not counted against the score. Most notably, the null-conditional (`?.`) and null-coalescing (`??`) operators are great ways to use the language to reduce complexity, as in the following example:
// Not good, increments the score
var myString = string.Empty;
if (someObj != null)
    myString = someObj.StringProperty;

// Good, does not increment the score
var myString = someObj?.StringProperty ?? string.Empty;
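The same idea applies to guarded method calls. Here’s another contrived example (`logger` and `Log` are hypothetical stand-ins):
// Not good, increments the score
if (logger != null) // +1
    logger.Log(message);

// Good, does not increment the score
logger?.Log(message);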
Examples
Let’s use these rules and refactor something. Using what we know, we can identify some easy ways to reduce complexity and make a method easier to read and maintain. Let’s start with a big, nasty method and identify the areas with the highest complexity:
public async Task ParseAsync(string startingUrl, bool recurse = false, bool verbose = false)
{
    startingUrl = CleanUrl(startingUrl);
    _rootUrl = new Uri(startingUrl).GetLeftPart(UriPartial.Authority);

    var urls = new Dictionary<string, int>();
    urls.Add(startingUrl, 0);

    while (urls.Any(x => x.Value == 0))
    {
        // Grab any URLs from the dictionary that we have not checked. If we are recursing,
        // new URLs will be added with a status code of 0 and we will pick them up on the
        // next pass.
        var urlsToProcess = urls.Where(x => x.Value == 0).Select(x => x.Key).ToList();

        foreach (var url in urlsToProcess)
        {
            var displayUrl = url.Length > (Console.BufferWidth - 10)
                ? $"{url.Substring(0, Console.BufferWidth - 10)}..."
                : url;
            Console.Write(displayUrl);

            HttpResponseMessage response;
            try
            {
                response = await _httpClient.GetAsync(url);
            }
            catch (HttpRequestException ex)
            {
                urls[url] = -1;
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine($" [{ex.Message}]");
                Console.ResetColor();
                continue;
            }

            urls[url] = (int)response.StatusCode;

            if (response.IsSuccessStatusCode)
            {
                if (verbose)
                {
                    Console.ForegroundColor = ConsoleColor.Green;
                    Console.WriteLine($" [{(int)response.StatusCode}]");
                    Console.ResetColor();
                }
                else
                {
                    // Clear the current line
                    Console.SetCursorPosition(0, Console.CursorTop);
                    Console.Write(new string(' ', Console.BufferWidth - 1));
                    Console.SetCursorPosition(0, Console.CursorTop);
                }
            }
            else
            {
                // Write the error code and exit the loop early
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine($" [{(int)response.StatusCode}]");
                Console.ResetColor();
                continue;
            }

            // Exit early if we are not recursing unless we are checking the starting URL
            if (!recurse && url != startingUrl)
                continue;

            // Exit early if the URL is external or if the content is not HTML
            if (!IsInternalUrl(url) || !response.Content.Headers.ContentType.MediaType.StartsWith("text/html"))
                continue;

            try
            {
                // If we made it this far, we will parse the HTML for links
                var html = await response.Content.ReadAsStringAsync();

                // We add each link to the dictionary with a status code of 0. Since
                // we are using a dictionary, URLs that are already checked or slated
                // to be checked will be ignored.
                GetUrlsFromHtml(html).ForEach(x => urls.TryAdd(x, 0));
            }
            catch
            {
                Console.WriteLine(url);
                Console.WriteLine("\tUnable to parse HTML.");
            }
        }
    }
}
Some context on what is happening would be nice, right? This is the main method from a small side project I made called Linky. It’s a command line app to scan websites and attempt to identify broken links. I’ve left out some helper methods here so we can focus on the big one that could use some refactoring.
Let’s try to identify some issues before we start. First, there are up to four levels of nesting. That’s pretty bad and should probably be our main target. There’s also some boilerplate code around writing to the console that could be abstracted away. This won’t help our complexity score all that much, but it would make the method a lot easier to read.
I calculate a Cognitive Complexity score of 31. SonarQube recommends that methods not exceed 15.
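Here’s a rough tally of where that 31 comes from (SonarQube does this for you, but it’s useful to see the hot spots):
// while                              +1
// foreach                            +1 (+1 for nesting)
// ternary for displayUrl             +1 (+2 for nesting)
// catch (HttpRequestException)       +1 (+2 for nesting)
// continue                           +1
// if (response.IsSuccessStatusCode)  +1 (+2 for nesting)
// if (verbose)                       +1 (+3 for nesting)
// continue                           +1
// if (!recurse && ...)               +1 (+2 for nesting), +1 for the && sequence
// continue                           +1
// if (!IsInternalUrl(url) || ...)    +1 (+2 for nesting), +1 for the || sequence
// continue                           +1
// catch                              +1 (+2 for nesting)
// Total                              = 31
The deeply nested conditionals in the middle of the loop are clearly the biggest offenders.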
What changes can be made to reduce that score? I would first extract the contents of the big `foreach` loop into a new method. That will reduce the nesting of everything inside that loop by two levels, which would be a huge win. Then I would extract the various console outputs into new methods. In some cases this also pulls control flow breaks out of the main method, further reducing nesting.
Let’s see how it all shapes up:
public async Task ParseAsync(string startingUrl, bool recurse = false, bool verbose = false)
{
    _startingUrl = CleanUrl(startingUrl);
    _rootUrl = new Uri(_startingUrl).GetLeftPart(UriPartial.Authority);
    _recurse = recurse;
    _verbose = verbose;

    var urls = new Dictionary<string, int>();
    urls.Add(_startingUrl, 0);

    while (urls.Any(x => x.Value == 0))
    {
        // Grab any URLs from the dictionary that we have not checked. If we are recursing,
        // new URLs will be added with a status code of 0 and we will pick them up on the
        // next pass.
        var urlsToProcess = urls.Where(x => x.Value == 0).Select(x => x.Key).ToList();

        foreach (var url in urlsToProcess)
            await ProcessUrlAsync(url, urls);
    }
}

private async Task ProcessUrlAsync(string url, Dictionary<string, int> urls)
{
    DisplayUrl(url);

    HttpResponseMessage response;
    try
    {
        response = await _httpClient.GetAsync(url);
    }
    catch (HttpRequestException ex)
    {
        urls[url] = -1;
        DisplayErrorCode(ex.Message);
        return;
    }

    urls[url] = (int)response.StatusCode;

    if (response.IsSuccessStatusCode)
        DisplaySuccessCode(urls[url]);
    else
    {
        // Write the error code and exit early
        DisplayErrorCode(urls[url].ToString());
        return;
    }

    // Exit early if we are not recursing unless we are checking the starting URL
    if (!_recurse && url != _startingUrl)
        return;

    // Exit early if the URL is external or if the content is not HTML
    if (!IsInternalUrl(url) || !response.Content.Headers.ContentType.MediaType.StartsWith("text/html"))
        return;

    try
    {
        // If we made it this far, we will parse the HTML for links
        var html = await response.Content.ReadAsStringAsync();

        // We add each link to the dictionary with a status code of 0. Since
        // we are using a dictionary, URLs that are already checked or slated
        // to be checked will be ignored.
        GetUrlsFromHtml(html).ForEach(x => urls.TryAdd(x, 0));
    }
    catch
    {
        Console.WriteLine(url);
        Console.WriteLine("\tUnable to parse HTML.");
    }
}

private void DisplayUrl(string url)
{
    var displayUrl = url.Length > (Console.BufferWidth - 10)
        ? $"{url.Substring(0, Console.BufferWidth - 10)}..."
        : url;
    Console.Write(displayUrl);
}

private void DisplaySuccessCode(int statusCode)
{
    if (_verbose)
    {
        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine($" [{statusCode}]");
        Console.ResetColor();
    }
    else
    {
        // Clear the current line
        Console.SetCursorPosition(0, Console.CursorTop);
        Console.Write(new string(' ', Console.BufferWidth - 1));
        Console.SetCursorPosition(0, Console.CursorTop);
    }
}

private void DisplayErrorCode(string errorCode)
{
    Console.ForegroundColor = ConsoleColor.Red;
    Console.WriteLine($" [{errorCode}]");
    Console.ResetColor();
}
How did we do?
- The main method, `ParseAsync`, now has a score of 3. 🎉
- `ProcessUrlAsync` is now the biggest method, but since it’s not nested inside two loops, the score is only 7 (a tally follows this list). 🎉🎉
- `DisplayUrl` and `DisplaySuccessCode` have a score of 1 each. `DisplayErrorCode` has a score of 0.
- Some parameters are now assigned to private fields. This has no effect on the score, but it keeps the code more readable since we don’t need to pass them around as much.
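For the curious, here’s the same kind of rough tally for `ProcessUrlAsync`:
// catch (HttpRequestException)       +1
// if (response.IsSuccessStatusCode)  +1
// if (!_recurse && ...)              +1, +1 for the && sequence
// if (!IsInternalUrl(url) || ...)    +1, +1 for the || sequence
// catch                              +1
// Total                              = 7
None of these structures are nested anymore, so there are no nesting increments. That is where most of the savings came from.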
Functionally, the code is the same: it executes and outputs exactly as it did before. The only difference is how we structured what we are doing. We created smaller, more focused methods that can be more easily understood and thus more easily maintained. Cognitive Complexity helped us identify the most complex parts of the method so we could refactor where it was needed most.
Photo by Andrew Seaman on Unsplash