Computers that learn from and repeat human behaviour save time and money, but what happens when they repeat flawed traits or errors thousands of times per second?
By the time you read these words, much of what has appeared on the screen of whatever device you are using will have been dictated by conditional instructions laid down in lines of code, whose weightings and outputs depend on your behaviour or characteristics.
We live in the Age of the Algorithm, where computer models save time, money and lives. Gone are the days when labyrinthine formulae were the exclusive domain of finance and the sciences - nonprofit organisations, sports teams and the emergency services are now among their beneficiaries. Even romance is no longer a statistics-free zone.
But the very feature that makes algorithms so valuable - their ability to replicate human decision-making in a fraction of the time - can be a double-edged sword. If the observed human behaviours that dictate how an algorithm transforms input into output are flawed, we risk setting in motion a vicious circle when we hand over responsibility to The Machine.
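The mechanism behind that vicious circle can be sketched in a few lines. Everything below is invented for illustration (the groups, the historical data and the decision threshold are all hypothetical): an algorithm that learns approval rates from flawed human decisions turns a statistical skew into a hard rule, then applies it at machine speed.

```python
# Hypothetical illustration: past human reviewers approved applicants
# from group "A" more often than otherwise-identical applicants from
# group "B". An algorithm trained on those decisions inherits the skew.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 50 + [("B", False)] * 50

def learned_approval_rate(group, data):
    """Estimate P(approve | group) from past human decisions."""
    decisions = [approved for g, approved in data if g == group]
    return sum(decisions) / len(decisions)

def algorithm_decides(group, data):
    """Approve whenever the learned rate exceeds 0.5 -- the skew in
    the training data becomes a blanket rule applied to everyone."""
    return learned_approval_rate(group, data) > 0.5

print(learned_approval_rate("A", history))  # 0.8
print(learned_approval_rate("B", history))  # 0.5
print(algorithm_decides("A", history))      # True
print(algorithm_decides("B", history))      # False
```

A human reviewer treated each case individually, approving half of group B; the learned rule rejects all of them, and every decision it makes feeds back into the next generation of training data.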