More SIMD vectors exploration

Since my earlier post looking into SIMD vectors in .NET, I attempted a few more experiments, trying to understand better where they might be a good fit, and where they would not.

The short version: at this point, my sense is that SIMD vectors can be very handy in some specific scenarios, but would require quite a bit of work to be usable the way I was hoping to use them. This is by no means a knock on SIMD; rather, it reflects my realization that a SIMD vector is quite different from a mathematical vector.

All that follows should also be taken with a big grain of salt. SIMD is entirely new to me, and I might not be using it right. While on that topic, I also want to thank @xoofx for his very helpful comments - much appreciated!

Anyways, with these caveats out of the way, let’s dive into it.

My premise approaching SIMD was along these lines: “I write a lot of code that involves vectors. Surely, the Vector class should be a good fit to speed up some of that code”.

To explore that idea, I attempted to write a few classic vector operations I often need, both in plain old F# and using SIMD vectors, trying to benchmark how much performance I would gain. In this post, I’ll share some of the results, and what I learnt in the process.

You can find the whole code here on GitHub.

More...

Should I use dotnet SIMD Vectors?

Even though a lot of my work involves writing computation-heavy code, I have not been paying close attention to the System.Numerics namespace, mainly because I am lazy and working with plain old arrays of floats has been good enough for my purposes.

This post is intended as a first dive into the question "should I care about .NET SIMD-accelerated types?". More specifically, I am interested in better understanding Vector<T>, and in figuring out where I should use it instead of basic arrays for vector operations, a staple of machine learning.

Spoiler alert: some of my initial results surprised me, and I don’t understand yet what drives the speed differences between certain examples. My intent here is to share what I found so far, which I found interesting enough to warrant further exploration later on.

Anyways, let’s dive in. As a starting point, I picked a very common operation in Machine Learning: computing the Euclidean distance between 2 vectors.
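To give a concrete sense of what the two versions look like, here is a simplified sketch, not the exact code from the benchmarks: it assumes both arrays have the same length, and that this length is a multiple of Vector<float>.Count.

``` fsharp
open System.Numerics

// Plain version: accumulate squared differences, then take the square root.
let distance (v1: float []) (v2: float []) =
    let mutable total = 0.0
    for i in 0 .. v1.Length - 1 do
        let d = v1.[i] - v2.[i]
        total <- total + d * d
    sqrt total

// SIMD version: process the arrays in chunks of Vector<float>.Count elements,
// using Vector.Dot to compute the sum of squared differences for each chunk.
let distanceSimd (v1: float []) (v2: float []) =
    let chunk = Vector<float>.Count
    let mutable total = 0.0
    let mutable i = 0
    while i <= v1.Length - chunk do
        let d = Vector<float>(v1, i) - Vector<float>(v2, i)
        total <- total + Vector.Dot(d, d)
        i <- i + chunk
    sqrt total
```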

You can find the whole code here on GitHub.

More...

Quipu 0.2.2, a Nelder-Mead solver with a side of NaN

The main reason I created Quipu is that I needed a Nelder-Mead solver for a real-world project. And, as I put Quipu through its paces on real-world data, I ran into some issues, revolving around “Not a Number” floating point values, aka NaN.

tl;dr: the latest release of Quipu, version 0.2.2, available on nuget, should handle NaN values decently well, and has some minor performance improvements, too.

In this post, I will go over some of the changes I made, and why. Fixing the main issue made me realize that I didn’t know floating point numbers as well as I thought, even though I have been using them every working day for years. I will take some tangents to discuss some of my learnings.

So let’s dig in. My goal with Quipu was to implement the Nelder-Mead algorithm in F#. The purpose of Nelder-Mead is to find a numeric approximation of values that minimize a function. As an illustration, if we took the function $f(x) = x ^ 2$, we would like to know what value of $x$ produces the smallest possible value for $f(x)$, which happens to be $x = 0$ in this case.

Quipu handles that case just fine:

``` fsharp
open Quipu.NelderMead

let f x = x ** 2.0
f
|> NelderMead.minimize
|> NelderMead.solve
```

```
val it: Solution = Optimal (0.0, [|0.0|])
```

So far, so good. Now, what about a function like $f(x) = \sqrt{x}$?

That function is interesting for 2 reasons:

  • $f(x)$ has a minimum, for $x = 0$.
  • $f(x)$ is defined only for $x \ge 0$: it is a partial function.

Sadly, the previous version of Quipu, version 0.2.1, failed to find it. It would go into an infinite loop instead.
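The root of the problem is how .NET evaluates the square root of a negative number: instead of throwing an exception, it quietly returns nan, as this quick F# Interactive check illustrates (plain F#, not Quipu code):

``` fsharp
// sqrt is only defined for non-negative inputs; outside of its domain,
// .NET does not throw, it returns nan instead.
let g x = sqrt x
g (-1.0)
// val it: float = nan
```

Once nan values sneak into the search, comparisons stop behaving the way you would expect, which is a recipe for trouble.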

More...

Network maximum flow in F#

An old math problem I had not seen since my university days resurfaced the other day, the Maximum Flow problem. It came up in the context of analyzing some industrial process. For illustration purposes, let’s say we are producing sausages, following these steps: we grind some meat, add some seasoning, then stuff and tie the sausage casings, and split them into delicious sausage links.

We could represent this process as a graph, like so:

mermaid diagram 1: linear process

``` mermaid
graph TD;
    grind_meat --> season_meat;
    season_meat --> stuff_casing;
    stuff_casing --> cut_links;
```

Now the question is, how many sausages per minute could we produce?

Assuming each operation is performed by a separate person, this is not very complicated. We are going to be as slow as the slowest link. So if for instance we could

  • grind meat for 20 sausages per minute,
  • season meat for 15 sausages per minute,
  • stuff 5 sausages per minute, and
  • cut 20 sausages per minute,

we would end up running at 5 sausages / minute at best, the bottleneck being stuffing.
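In code, the throughput of such a linear chain is simply the smallest of the individual rates, for instance:

``` fsharp
// Rates for each step of the linear process, in sausages per minute.
let rates = [ 20.0; 15.0; 5.0; 20.0 ]
// The chain can only move as fast as its slowest step.
let throughput = rates |> List.min
// throughput = 5.0
```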

Now we might be able to get a better throughput with some parallelization. For instance, we could organize production like so:

mermaid diagram 2: complex process

``` mermaid
graph TD;
    grind_meat --> season_meat_1;
    grind_meat --> season_meat_2;
    season_meat_1 --> stuff_casing1;
    season_meat_1 --> stuff_casing2;
    season_meat_2 --> stuff_casing2;
    season_meat_2 --> stuff_casing3;
    stuff_casing1 --> cut_links_1;
    stuff_casing2 --> cut_links_1;
    stuff_casing3 --> cut_links_1;
```

This is still not overly complicated, but it is beginning to get hairy, and you can imagine how, with a few more processes added, the question “how much work can I process through this network” would soon become impractical to handle by hand.

This is essentially what the Maximum Flow problem is about. Given a directed graph, with capacity limitations, how much throughput (flow) can we achieve? In the rest of this post, I’ll go through one way you could answer that question, using Linear Programming.
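Before getting into any code, here is a sketch of the standard LP formulation (my notation here, which may differ from the one used in the rest of the post): with a flow variable $x_{ij}$ for every edge $(i,j)$, capacities $c_{ij}$, a source $s$ and a sink $t$, we want to

$$
\begin{aligned}
\text{maximize} \quad & \sum_{j} x_{sj} \\
\text{subject to} \quad & 0 \leq x_{ij} \leq c_{ij} && \text{for every edge } (i,j), \\
& \sum_{i} x_{ij} = \sum_{k} x_{jk} && \text{for every node } j \neq s, t.
\end{aligned}
$$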

More...

Study notes: constraints in Quipu Nelder-Mead solver

In my previous post, I went over the recent changes I made to my F# Nelder-Mead solver, Quipu. In this post, I want to explore how I could go about handling constraints in Quipu.

First, what do I mean by constraints? In its basic form, the solver takes a function, and attempts to find the set of inputs that minimizes that function. Lifting the example from the previous post, you may want to know what values of $(x,y)$ produce the smallest value for $f(x,y)=(x-10)^2+(y+5)^2$. The solution happens to be $(10,-5)$, and Quipu solves that without issues:

#r "nuget: Quipu, 0.2.0"
open Quipu.NelderMead

let f (x, y) = (x - 10.0) ** 2.0 + (y + 5.0) ** 2.0
NelderMead.minimize f
|> NelderMead.solve

val it: Solution = Optimal (2.467079917e-07, [|9.999611886; -4.999690039|])

However, in many situations, not every value will do. There might be restrictions on what values are valid, such as “x must be positive”, or “y must be less than 2”. These are known as constraints, and can typically be expressed as inequalities, in our case something like $g(x,y) \leq 0$. How could we go about handling such constraints in our solver?
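As a point of reference, one classic approach, used here purely as an illustration and not as a statement of how Quipu does or should handle this, is to fold the constraint into the objective as a penalty term, so an unconstrained solver can still be used:

``` fsharp
// Illustration only: penalize violations of g(x, y) <= 0 by adding a large
// cost to the objective, steering the solver back towards the feasible region.
let penalized f g =
    fun (x, y) ->
        let violation = max 0.0 (g (x, y))
        f (x, y) + 1_000_000.0 * violation * violation

// Hypothetical example: same objective as above, with the constraint x <= 2.
let constrained =
    penalized
        (fun (x, y) -> (x - 10.0) ** 2.0 + (y + 5.0) ** 2.0)
        (fun (x, y) -> x - 2.0)
```

The obvious drawback is that the penalty weight has to be picked by hand, and a poor choice can either distort the solution or leave the constraint effectively ignored.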

More...