What is your program all about?

This software is part of a series of artistic projects to imagine and facilitate improved realities. The program uses two machine learning techniques: one to identify vehicles, and another to paint in what is likely to be behind them, frame by frame — effectively removing vehicles from video footage.
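The "paint in what's behind them, frame by frame" idea can be sketched in a simplified form. This is not the author's actual pipeline — a real system would pair a neural object detector with a learned inpainting model. In this sketch the per-frame vehicle masks are assumed to be given, and masked pixels are filled from a running estimate of the background built up from earlier frames:

```python
import numpy as np

def remove_vehicles(frames, masks):
    """Naive sketch: replace masked (vehicle) pixels in each frame with a
    background estimate accumulated from pixels seen without vehicles.

    frames: list of uint8 arrays (the video frames)
    masks:  list of bool arrays, True where a vehicle was detected
    """
    background = frames[0].astype(float)
    seen = ~masks[0]  # pixels for which we already have background info
    cleaned = []
    for frame, mask in zip(frames, masks):
        clean = frame.copy()
        # fill vehicle pixels where the background has been observed before;
        # pixels never seen without a vehicle are left untouched
        fill = mask & seen
        clean[fill] = background[fill]
        # update the background model wherever this frame shows no vehicle
        visible = ~mask
        background[visible] = frame[visible]
        seen |= visible
        cleaned.append(clean)
    return cleaned
```

A learned inpainting model would instead hallucinate plausible texture even for regions never observed vehicle-free, which is why the original project can handle persistently busy roads.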

What prompted you to create it?

The idea for this project came from walking down Devils Tower Road – a walk that often coincides with an uncomfortable assault of construction dust blown into Gibraltar across that huge dirt mound, and an oppressive cacophony of lorries and cars.

I started imagining what an environment better suited to human living would be like: an urban environment that is greener and more pedestrian-friendly. There are many initiatives in other countries working towards this shared vision, and I think Gib could really benefit from it too.

What are your plans for future development, if any?

Future development originally included an augmented reality app to let anyone see this vision in real time. Following the unexpected success and reach of the original video snippet, though, I feel the message has already been received. I would still like to exhibit this somewhere as a high-resolution, real-time display of a busy pedestrian/vehicle area, but that would require performance enhancements to the original software and expensive hardware.

What effect do you think this type of technology could have on high-traffic areas such as Gibraltar?

This technology, as it stands, is only an imagining of what a better future might look like. It’s up to our own policy decisions and community action to reclaim our urban spaces for better quality living.

In what other ways do you foresee AI taking more prominent roles within our society?

On the topic of AI in general and its role in our society, it’s a tool like any other, with the potential for positive and negative use.

And what issues could potentially arise from the implementation of such roles?

The potential issues around AI often arise more from technology-oriented solutionism than from intentional misuse.

Machine learning techniques often rely on huge amounts of human-produced training data and heuristics. As we humans are still affected by a large number of cognitive biases, including cultural and gender stereotypes, the information we produce often suffers from the same. The result is that AI algorithms inherit the biases of our human majority if we’re not careful to manage the information we train these systems on.


We can also be too quick to look for technological solutions to human problems, in a way that potentially negates the positive aspects of our human experience. Some big companies invested in AI technologies are starting to become aware of these issues and are increasingly respectful of the human in human-computer interaction – communicating to the user things like the confidence of the AI's prediction and the program’s specific limitations, and offering manual overrides and alternative suggestions. The worst experience is a ‘computer says no’ without any explanation or alternative. (See Google’s PAIR: People + AI Research.)

Unfortunately, the default in many AI-driven experiences today still treats us more like servants than masters, from Amazon's suggested products to Facebook's echo chambers.

What else do you have in the pipeline?

I’m working on a few different projects that aim to imagine and facilitate more positive experiences, including apps to encourage moments of shared human attention (as opposed to the isolation of social media consumerism), technology to better care for our climate, apps to encourage personal development, and art focused on all aspects of the human experience. Aside from these personal projects and collaborations, I’m starting a creative studio offering existing businesses AI development skills towards socially positive solutions.