23 Jul 2025
On February 25, 2023, I made the initial commit to Quipu. I needed a
Nelder-Mead solver in .NET, and couldn’t find one, so I started writing my
own. Today, I am happy to announce version 1.0.0 of Quipu!
What does it do?
Quipu takes in a function, and searches for the arguments that minimize (or
maximize) the value of that function. This is a problem that arises in many
areas (curve fitting, machine learning, finance, optimization, …).
Rather than go into a lengthy explanation, let’s demonstrate with a simple
example. Imagine that we have a fictional factory, where we produce
Widgets:
- We sell Widgets for $12 per unit
- Producing a Widget costs $5 per unit
- Shipping Widgets: the more Widgets we produce in a day, the further we have
  to ship to reach customers and sell them. Shipping n Widgets costs us
  $0.5 * n * n. As a result, the total transportation cost increases rapidly:
  shipping 1 Widget would cost us half a dollar only, whereas 10 Widgets would
  cost us $50 total.
We could represent this fictional model in C# like so:
public class ProfitModel
{
    public static double ProductionCost(double volume)
    {
        return 5 * volume;
    }

    public static double TransportationCost(double volume)
    {
        return 0.5 * (volume * volume);
    }

    public static double Revenue(double volume)
    {
        return 12 * volume;
    }

    public static double Profit(double volume)
    {
        return
            Revenue(volume)
            - ProductionCost(volume)
            - TransportationCost(volume);
    }
}
How many Widgets should we produce, if we want to maximize our daily profit?
Let’s ask Quipu:
using Quipu.CSharp;

var solverResult =
    NelderMead
        .Objective(ProfitModel.Profit)
        .Maximize();

if (solverResult.HasSolution)
{
    var solution = solverResult.Solution;
    Console.WriteLine($"Solution: {solution.Status}");
    var candidate = solution.Candidate;
    var args = candidate.Arguments;
    var value = candidate.Value;
    Console.WriteLine($"Profit({args[0]:N3}) = {value:N3}");
}
The answer we get from Quipu is:
Solution: Optimal
Profit(7.000) = 24.500
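As a quick sanity check: the daily profit for a volume n is
12 * n - 5 * n - 0.5 * n * n = 7 * n - 0.5 * n * n, which is maximized where
its derivative, 7 - n, equals zero. That is indeed n = 7, for a profit of
7 * 7 - 0.5 * 49 = 24.5.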
More...
09 Jul 2025
I spent some time revisiting my solver library Quipu recently, looking in
particular at improving the user experience when the algorithm encounters
abnormal situations, that is, when the objective function could throw an
exception. This in turn got me wondering about the performance cost of using
try ... catch blocks, when the code does not throw any exception.
Based on a quick internet search, the general wisdom seems to be that the cost
is minimal. However, Quipu runs as a loop, evaluating the same function over
and over again, so I was interested in quantifying how minimal that impact
actually is.
For clarity, I am not interested in the case where an exception is thrown.
Handling an exception IS expensive. What I am after here is the cost of
just adding a try ... catch block around a well-behaved function.
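As a minimal illustration of the kind of wrapper in question (a sketch only,
not the actual benchmark code from the post; note that F# spells the
construct try ... with):

// The objective is well-behaved: the handler below never runs.
// The question is what the mere presence of the block costs.
let objective (x: float) = x * x

let protectedObjective (x: float) =
    try
        objective x
    with
    | _ -> nan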
So let’s check that out!
More...
02 Jul 2025
For many reasons, I am not a fan of the current hype around Large Language
Models (LLMs). However, a few months ago, I was asked to work on a project to
evaluate using LLMs for a practical use case. I figured this would be an
interesting opportunity to see for myself what worked and what didn’t, and
perhaps even change my mind on the overall usefulness of LLMs.
In this post, I will go over some of the things I found interesting.
Caveat: I have a decent knowledge of Machine Learning, but this was my first
foray into LLMs. As a result, this post should not be taken as competent
advice on the topic. It is intended as a beginner’s first impressions.
Context
The client - let’s call them ACME Corp - produces and distributes many products
all over the world. Plenty of useful information about these products, such as
inventory or shipments, is available in a database. Unfortunately, most
employees at ACME Corp have neither access nor a good enough grasp of SQL (or
of the database itself) to make use of that information.
The thought then was to explore if, by using LLMs, we could give users a way to
access that information, in their own language (“what is the current inventory
of sprockets model 12345 in Timbuktu”), without the hurdle of writing complex
SQL queries. And, because ACME Corp is international, “in their own language”
is meant quite literally: the question could be asked in English, as well as in
a wide range of other languages.
At a high level, we want something like this:
[Diagram: a question in plain language goes to the LLM, which turns it into a
SQL query against the database and returns the answer to the user]
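For illustration, here is a hypothetical sketch of that pipeline (none of
these functions come from the actual project; they only name the moving
parts):

// Hypothetical pipeline, with stubbed-out steps.
let translateToSql (question: string) : string =
    // the LLM turns a plain-language question into a SQL query
    "SELECT Inventory FROM Products WHERE Model = '12345'"

let queryDatabase (sql: string) : string =
    // stub: the real implementation would run the query against the database
    "42 sprockets in Timbuktu"

let answer (question: string) : string =
    question
    |> translateToSql
    |> queryDatabase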
Given the time budget on the project, we did not have the option to fine-tune a
model for our domain, and used a “stock” LLM.
More...
11 Jun 2025
In my previous post, I went over fitting the parameters of a Log-Normal
distribution to a sample of observations, using Maximum Likelihood Estimation
(MLE) and Quipu, my Nelder-Mead solver. MLE was overkill for the example I
used, but today I want to illustrate some more interesting things you could do
with MLE, building up from the same base setup.
Let’s do a quick recap first. I will be using the following libraries:
#r "nuget: MathNet.Numerics, 5.0.0"
#r "nuget: MathNet.Numerics.FSharp, 5.0.0"
#r "nuget: Plotly.NET, 5.0.0"
#r "nuget: Quipu, 0.5.2"
Our starting point is a sample of 100 independent observations, generated by a
Log-Normal distribution with parameters Mu = 1.3 and Sigma = 0.3 (which
describe the shape of the distribution), like so:
open MathNet.Numerics.Random
open MathNet.Numerics.Distributions
let mu, sigma = 1.3, 0.3
let rng = MersenneTwister 42
let duration = LogNormal(mu, sigma, rng)
let sample =
    duration.Samples()
    |> Seq.take 100
    |> Array.ofSeq

If we want to find a distribution that fits the data, we need a way to compare
how well two distributions fit the data. The likelihood function does just
that: it measures how likely it is that a particular distribution could have
generated the sample - the higher the number, the higher the likelihood:
let logLikelihood sample distributionDensity =
    sample
    |> Array.sumBy (fun observation ->
        observation
        |> distributionDensity
        |> log
        )
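For instance (an illustration, not code from the original post), we could
compare two arbitrary candidate distributions against our sample; the better
fit is the one with the higher log-likelihood:

// Illustration only: two arbitrary candidate distributions.
let candidate1 = LogNormal(1.0, 0.5)
let candidate2 = LogNormal(1.3, 0.3)

logLikelihood sample (fun x -> candidate1.Density x)
|> printfn "Candidate 1: %.2f"
logLikelihood sample (fun x -> candidate2.Density x)
|> printfn "Candidate 2: %.2f"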
More...
28 May 2025
Back in 2022, I wrote a post about using
Maximum Likelihood Estimation with DiffSharp to analyze the reliability of
a production system. Around the same time, I also started developing - and
blogging about - Quipu, my F# implementation of the Nelder-Mead algorithm.
The two topics are related. Using gradient descent with DiffSharp worked fine,
but wasn’t ideal. For my purposes, it was too slow, and the gradient approach
was more complex than necessary. This led me to investigate whether a
simpler maximization technique like Nelder-Mead would do the job, which in turn
led me to develop Quipu.
Fast forward to today: while Quipu is still in pre-release, its core is fairly
solid now, so I figured I would revisit the problem, and demonstrate how you
could go about using Quipu on a Maximum Likelihood Estimation (or MLE in short)
problem.
In this post, we will begin with a simple problem, to set the stage. In
the next installment, we will dive into a more complex case, to illustrate why
MLE can be such a powerful technique.
The setup
Imagine that you have a dataset, recording when a piece of equipment
experienced failures. Perhaps you are interested in simulating that piece of
equipment, and therefore want to model the time elapsed between failures. As a
starting point, you plot the data as a histogram, and observe something like
this:
[Histogram of the observed times between failures]
It looks like observations fall between 0 and 8, with a peak around 3.
What we would like to do is estimate a distribution that fits the data. Given
the shape we are observing, a LogNormal distribution is a plausible
candidate. It takes only positive values, which we would expect for durations,
and its density climbs to a peak and then decreases slowly, which is what we
observe here.
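As a quick check on that intuition (a sketch, using MathNet.Numerics as in
the follow-up posts, with arbitrary parameters), we can tabulate the density
of a Log-Normal distribution and confirm that it rises to a single peak and
then tails off:

open MathNet.Numerics.Distributions

// Arbitrary parameters, for illustration only.
let candidate = LogNormal(1.3, 0.3)

[ 0.5 .. 0.5 .. 8.0 ]
|> List.iter (fun x ->
    printfn "%4.1f -> %.3f" x (candidate.Density x)
    )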
More...