# SharpNEAT 2.4.0

## Overview

This version contains the following changes:

* HyperNEAT / CPPNs
  * Fix: CPPN network outputs were bounded to the interval [0,1], but the outputs must be unbounded because they describe connection weights in the networks created by the CPPN.
* NEAT
  * Fix: possible bias away from selecting the last node (usually a hidden node, unless there are none) for 'add node' mutations.
* Activation functions
  * New activation functions: LeakyReLU, LeakyReLUShifted, LogisticApproximantSteep, ScaledELU.
  * SReLUShifted performance tuning.
* .NET Framework / dependencies
  * SharpNeatLib now targets .NET Standard 2.0.
  * All other libs upgraded to target .NET Framework 4.7.1.
  * Upgraded the Redzen nuget package to version 4.0 (now a .NET Standard 2.0 assembly).
  * Upgraded Visual Studio project files (*.csproj) to the new leaner format.
* Various other minor fixes and maintenance; see the git history.
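As an illustration of the new activation functions named above, here are sketches of two of them in Python. These follow the standard textbook definitions, not SharpNEAT's exact source: the leaky slope value is an assumed default, and ScaledELU uses the well-known self-normalising SELU constants.

```python
import math

def leaky_relu(x, a=0.001):
    """Leaky rectified linear unit: identity for x > 0, small slope a otherwise.
    The slope value is illustrative; SharpNEAT's constant may differ."""
    return x if x > 0.0 else a * x

def scaled_elu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    """Scaled ELU (SELU): exponential below zero, linear above, both scaled by
    the fixed self-normalising constants."""
    return scale * (x if x > 0.0 else alpha * (math.exp(x) - 1.0))
```

Note that both functions are unbounded above, which is relevant to the CPPN output-range fix listed earlier.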

## Efficacy Sampling Tests

Efficacy sampling was performed on the two standard benchmark tasks, and the results were compared between this version and the previous one (version 2.3.1). The resulting best fitness histograms are provided below. To recap, these histograms show the best fitness achieved on each of a large number of independent SharpNEAT runs, each of which terminates after one minute of execution (clock time). Histograms are also provided comparing the evaluation counts achieved in each 60 second run.
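The sampling protocol described above (many independent time-limited runs, each contributing one best-fitness sample) can be sketched as follows. The run function here is a hypothetical stand-in for the real benchmark harness, not part of SharpNEAT.

```python
import random
import statistics

def run_once(seconds=60):
    """Hypothetical stand-in for one time-limited SharpNEAT run; returns the
    best fitness achieved in that run. Real runs would invoke the benchmark
    task harness instead of drawing a random number."""
    return random.gauss(500.0, 100.0)

# Efficacy sampling: collect one best-fitness sample per independent run,
# then summarise / histogram the sample distribution.
samples = [run_once() for _ in range(100)]
mean_best_fitness = statistics.mean(samples)
```

The histograms in this article are simply these per-run samples binned by fitness score.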

## Discussion

No major changes have been made between SharpNEAT v2.3.1 and v2.4, and therefore the efficacy sampling results were expected to be similar for these two versions. However, the two sets of results are clearly not a close match, and this is thought to be due to two factors.

Firstly, the two efficacy sampling runs were performed on two different versions of the .NET Framework: the SharpNEAT v2.3.1 run used framework 4.6.1, and the SharpNEAT v2.4 run used framework 4.7.1. This is a significant difference because, although both framework versions report the same CLR version (CLR 4.0.30319.42000), the CLR version number does not cover the JIT compiler, the framework class libraries, or the garbage collector, all of which are reported to have received numerous performance improvements between 4.6.1 and 4.7.1.

The second factor is a defect that was fixed between v2.3.1 and v2.4 in changeset 2d5c0a1. The neural net activation function used in both sets of results was supposed to be LeakyReLU, but the v2.3.1 results used SReLU, a function with a different shape and a slower execution time. This is very likely the cause of the dramatic fitness improvement on the sinewave task, specifically the emergence of a peak at the far right of the distribution, representing fitness scores approaching or at the maximum score of 1000 on that task.
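For reference, SReLU has the general S-shaped piecewise-linear form sketched below: linear in a middle interval, with shallow slopes beyond its left and right thresholds. The parameter values are illustrative defaults, not SharpNEAT's exact constants; the key point is that, unlike LeakyReLU, the function flattens out above its right threshold.

```python
def srelu(x, tl=0.0, al=0.001, tr=0.999, ar=0.001):
    """S-shaped rectified linear unit: identity for tl < x < tr, slope al
    below tl, slope ar above tr. Threshold and slope values here are
    illustrative, not SharpNEAT's exact constants."""
    if x <= tl:
        return tl + al * (x - tl)
    if x >= tr:
        return tr + ar * (x - tr)
    return x
```

The extra branching also suggests why SReLU evaluates more slowly than the two-branch LeakyReLU.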

To be clear, the official v2.3.1 release did not include the LeakyReLU activation function; v2.3.1 was tested with that function as part of *A Review of Activation Functions in SharpNEAT*, and it is those v2.3.1 results that are shown in the above efficacy sampling plots.

### Appendix 1: Test Platform Environment Details

```
OS Name:        Microsoft Windows 10 Home SP0.0
Architecture:   64-bit
.NET Framework: 4.7.1 (CLR 4.0.30319.42000)

CPU
  Brand:        GenuineIntel
  Name:         Intel Core i7-6700T CPU @ 2.80GHz
  Architecture: x64
  Cores:        4
  Frequency:    2808 MHz

RAM: 16 GB
```


### Appendix 2: Resources

#### Histogram CSV data

Colin,
February 10th, 2018