Designing the Invisible - How algorithms became our new material

From data to algorithms.

I used to believe that we design the relationship between machines and people: people do A, machines do B, and vice versa. But through my work with companies over the years and my research focusing on data, I realized that this binary thinking is coming to an end. Data lays the ground for something new: a new species with its own behaviours, one that is less bound to the shell it embodies and shaped by far more factors than we are used to considering.

It’s not about designing the communication between humans and machines.
 It’s about shaping the behaviour of the algorithm and its dialogue with people.

And that’s where things get interesting – and complex.

The new material has influence.

Before the rise of human-computer interaction, traditional materials were about friction. Weight. Limits. They responded to pressure in a way you could learn to feel, and in their domains they still do. But algorithms don’t behave like that. They behave within systems. They are shaped by context, by (training) data, by the intentions and beliefs of the people who created them, by the way people use them – and re-shaped by the consequences of their own predictions.

It’s a different kind of material. One that learns and co-exists. One that feeds on the world and mirrors it back to us, sometimes distorted, biased, invisibly influential.

If you’re a designer, you’re not just deciding how it looks. You’re influencing what gets seen. By whom. When. And why.

Data is not neutral. It’s directional.

There’s a common misconception that “data speaks for itself.” But as I’ve learned (sometimes the hard way), data never comes without context. It reflects decisions about what to measure, what to leave out, and how to interpret meaning. The problem is: once something becomes a metric, it feels objective – despite our focus on the data literacy of its audience.

But the reality is: data is not the truth – it’s a trail. A trace of human behavior, choices, conditions. And when we use data to train algorithms, we’re not just reflecting the world – we’re encoding a version of it.

Designing with data means navigating this uncertainty. Being aware of proxy signals, structural gaps, and the trade-offs we’re making every time we prioritize one outcome over another. Our responsibility is growing rapidly and becoming more complex – in step with the parameters and dependencies of our material.
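
To make the idea of a proxy signal concrete, here is a minimal, purely illustrative Python sketch (the item IDs, fields, and data are invented, not taken from any real system): the column we choose as a training label – clicks as a stand-in for usefulness – is itself a design decision, and it determines which version of the world the model will encode.

```python
# Hypothetical illustration: the label we train on is a design decision.
# "clicked" is a cheap proxy signal; "found_useful" is the outcome we
# actually care about but rarely get to measure at scale.
from dataclasses import dataclass

@dataclass
class Interaction:
    item_id: str
    clicked: bool        # proxy signal: easy to log
    found_useful: bool   # real outcome: expensive or impossible to log

log = [
    Interaction("a", clicked=True,  found_useful=False),  # attention-grabbing, not useful
    Interaction("b", clicked=False, found_useful=True),   # useful, but easy to overlook
    Interaction("c", clicked=True,  found_useful=True),
]

# The same behavioural trail yields two different "ground truths":
labels_by_proxy   = {i.item_id: i.clicked      for i in log}
labels_by_outcome = {i.item_id: i.found_useful for i in log}

# A model trained on labels_by_proxy learns to promote "a" and demote "b".
# That trade-off was made the moment the metric was chosen, not by the model.
print("proxy labels:  ", labels_by_proxy)
print("outcome labels:", labels_by_outcome)
```

The model never sees this trade-off; it simply inherits it.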

Trust doesn’t come from interfaces. It comes from expectations.

Through my work, exchanges with colleagues, and my study, I put forward the hypothesis that trust in these systems is shaped by two forces: mechanical trust (is it reliable?) and social trust (does it behave the way I expect?). Why both? Because these systems mirror their human creators while interacting with other humans – they are both product and participant.

That’s exactly what makes algorithms a design problem. The behaviour of a system depends not just on its internal logic, but on the situation its users are in, what they expect, and how outcomes and processes are communicated.

Trust isn’t a feature. It’s the result of expectations aligning with experience – over time, in context. As designers, we can’t guarantee trust. But we can design for trustworthiness. That means shaping feedback loops, exposing the logic of the system, advocating for the intent so it doesn’t go overboard during the building process, and building in mechanisms for correction – not just polish.
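
As a sketch of what “mechanisms for correction” can look like in practice – again hypothetical, with made-up scores, reasons, and a simple penalty rule – the snippet below ranks items, can explain why an item was shown, and lets user corrections actually change future behaviour:

```python
# Hypothetical sketch: designing for trustworthiness as behaviour, not polish.
# The system can explain why an item was shown, and user corrections feed back
# into future rankings instead of disappearing into a void.

scores = {"a": 0.9, "b": 0.6, "c": 0.4}   # assumed model output
reasons = {
    "a": "similar to items you saved last week",
    "b": "popular with people in your field",
    "c": "newly published in a topic you follow",
}
corrections: dict[str, int] = {}           # item_id -> number of "this was wrong" flags

def rank(scores, corrections, penalty=0.2):
    """Order items by score, discounted by accumulated user corrections."""
    adjusted = {k: v - penalty * corrections.get(k, 0) for k, v in scores.items()}
    return sorted(adjusted, key=adjusted.get, reverse=True)

def explain(item_id):
    """Expose the system's logic instead of treating it as a black box."""
    return reasons.get(item_id, "no explanation recorded")

print(rank(scores, corrections))                 # ['a', 'b', 'c']
corrections["a"] = corrections.get("a", 0) + 2   # users flag "a" as a bad result
print(rank(scores, corrections))                 # "a" drops: the feedback loop closes
print(explain("a"))
```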

Why good systems make good business.

Designing algorithmic systems isn’t just about ethics or aesthetics – it’s about business resilience and performance.

Poorly designed algorithms don’t just fail silently – they erode trust, invite regulatory scrutiny, and damage reputations. Design, when understood as the shaping of system behaviour, becomes a form of risk management. It allows teams to identify and mitigate unintended consequences before they become costly.

How can you measure it? With trust. Systems that are experienced as transparent and fair create stronger user relationships, reduce churn, and drive sustained engagement. In an age of data fatigue and rising expectations, trust isn’t soft – it’s strategic.

This only works if design moves beyond the interface. The logic, data flows, and adaptation mechanisms of algorithmic systems all encode behaviour. Embedding design here means shaping what the system does, not just how it looks.

So what does it mean to »design an algorithm«?

It means asking uncomfortable questions:

  • Who benefits when this prediction is right?
  • Who is excluded when this pattern holds?
  • What happens when the system is wrong – and who bears the cost?

It means not treating the algorithm as a black box to decorate, but as a material to be shaped – intentionally.

It also means holding space for uncertainty. Because real-world situations are messy. People are messy. The perfect Double Diamond rarely plays out in the real world. And if our design frameworks can’t enable people to deal with that messiness, we risk making things look clean while staying fundamentally unbalanced.

A more human-centred future starts behind the interface.

We often talk about human-centred design in terms of usability, flow, or accessibility. But taking it seriously means shifting the focus – from screens to systems, from aesthetics to consequences.

Because algorithms aren’t just technical tools. They’re social actors. They govern access, visibility, trust. And if we treat them as design material, we can make them not just more usable, but more balanced – and the business stronger in the long run.