Last updated: 15 Feb. 2022

“My boss is an algorithm” (Mon patron est un algorithme) is an investigative documentary produced by Premiere Ligne productions and released in France in September 2019.

It describes several evolutions in the world of work, where people are no longer managed by humans but directed by settings and patterns, with a large share of artificial intelligence involved.

One specific area of application of these algorithms struck me and led me to rethink my research subject to incorporate it. It caught my attention in relation to the ethics of using trained machine-learning models, which are usually criticised for issues of bias or data collection. What I want to focus on instead is their use, and their breeding of a new proletariat.

Machines are not able to train AI algorithms on their own: every task involving labeling is performed by human workers. These workers are not employed by any of the companies offering the service (Amazon is the leader, alongside Figure 8 and Microsoft), which have contributors all around the world connecting to their websites to perform micro-tasks for a few cents an hour. Without these millions of invisible workers, mostly isolated and in precarious situations, there would be no artificial intelligence. These companies may not be generating poverty, and might argue that people connect to their websites in full freedom, but they are built on a fault line: they exist and prosper thanks to this human labour.
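As a side note on how such labeling pipelines usually work: platforms hand the same micro-task to several workers and reconcile their answers, often by simple majority vote. The sketch below is only an illustration of that idea; the task ids, labels and function name are invented, not taken from any platform's API.

```python
from collections import Counter

def aggregate_labels(answers):
    """Majority vote over the labels submitted by several click workers.

    `answers` maps a task id to the list of labels workers returned for it;
    the function keeps the most frequent label per task.
    """
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in answers.items()}

# Three workers label two images for a "pasta with cheese" task.
answers = {
    "img_001": ["pasta_with_cheese", "pasta_with_cheese", "not_pasta"],
    "img_002": ["not_pasta", "not_pasta", "not_pasta"],
}
print(aggregate_labels(answers))
# {'img_001': 'pasta_with_cheese', 'img_002': 'not_pasta'}
```

The aggregated labels are what the machine-learning model is actually trained on; the individual workers' judgments disappear into the vote.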

This vision is far from the glamorous nomadic societies theorised in the last century, and put forward by those for whom remote work is a choice of comfort. The networks take advantage of the access they give to people in remote areas.

The tasks themselves are somewhat dehumanising and repetitive, and require you to set aside your own judgment in favour of precise charts defined by the tech giants. They are the ones shaping the social values that will be enforced.

This subject has attracted very little research and awareness, even though it raises fundamental questions.

Notes on the documentary:

Olivier Bousquet, responsible for AI at Google Europe, shows the use of Machine Learning through AutoML, a Google-powered data training tool.

Figure 8 is a company specialised in the manual labelling of data sets. It is not unique: Jeff Bezos invented the concept with Amazon Mechanical Turk, and Microsoft is another of the biggest players, all using the services of “click workers”. Lukas Biewald founded Figure 8 in 2007 and sold it in 2019 for 300 million dollars to the company Appen.

At the Commonwealth Club conference on the 3rd of March 2010, he laid out his vision of employment in the age of the internet, and the source of part of his financial prosperity:

“With new technologies, you can hire someone very easily, make them work for 10 minutes, pay them a pittance, and toss them away when you don’t need them anymore.”

According to him, 100,000 people consistently contribute to the datasets by regularly performing microtasks, in the US and around the world, and millions more connect and contribute on a more sporadic basis.

When challenged about the working conditions of these people, he stops the interview. His company has served all of the world’s biggest companies, ranging from Google to American Express, McDonald’s and Samsung: too many to mention, and too many to fit on the “wall of fame” of his offices.

As he describes it, Figure 8 takes its name from the supposed loop between people and technologies, working together towards a brighter future.

The journalists managed to meet with a couple of these contributors, after months of research.

  • Jared Mansfield is an occasional contributor to Figure 8; he has a job selling chicken at a supermarket for a salary of 1,500 dollars a month and connects to Figure 8 to supplement his income. He is shown working for 30 minutes; his task is to label products, teaching the machine which items are “pasta with cheese” and which are not. In half an hour he answers 18 questions and makes 15 cents, i.e. 30 cents an hour.

  • The second contributor is Dawn Carbone, a 46-year-old single mother of three living in subsidised housing in the region of Maine, US. One of her children has autism, which requires her to be home when the child gets back from school at 3 pm and to be able to pick her up when needed. The region has few job opportunities and harsh weather, with regular snowstorms.

She has been working for Figure 8 full time for three years, 8 hours a day, five days a week.

On good days she makes 5 dollars an hour; on others, a few cents. On average, she earns 250 dollars a month.

  • Christy Milland is an Amazon Mechanical Turk contributor. What she describes concerns tasks she performed for Google in 2017. She was asked to work on aerial images, mostly shot at night in the desert, showing mainly cars and people walking around; she had to indicate where she thought the people were heading, where they would be a few seconds later. She first believed it was for a transportation application, before realising it could only be for a drone, for which she was indicating where to shoot.

After protests from its office workers, Google withdrew from that project, launched by the US Army in 2017 under the name “Project Maven”.
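The hourly figures quoted for these contributors are easy to verify with a back-of-the-envelope check (the variable names are mine; the numbers are those given in the documentary):

```python
# Jared's session: 18 questions answered in 30 minutes for 15 cents.
earnings_usd = 0.15
minutes_worked = 30
questions_answered = 18

hourly_rate = earnings_usd * 60 / minutes_worked
cents_per_question = earnings_usd * 100 / questions_answered

print(f"{hourly_rate:.2f} $/hour")             # 0.30 $/hour
print(f"{cents_per_question:.2f} cents/question")  # 0.83 cents/question

# Dawn's average: 250 $/month over full-time hours (8 h/day, 5 days/week,
# assuming ~4 weeks per month).
monthly_usd = 250
hours_per_month = 8 * 5 * 4  # 160 hours
print(f"{monthly_usd / hours_per_month:.2f} $/hour")  # 1.56 $/hour
```

Both averages sit far below any minimum wage, which is the point the documentary is making.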

Jeanine Berg is a specialist on the question of web workers at the International Labour Organisation, which is worried about this phenomenon.

She explains how globalisation created a global workforce in the industrial sector. This would be another step, one that concerns services, reduced to microtasks that can be performed from anywhere. All the workers of the world are put in competition, which allows the platforms to lower wages. According to the ILO report, these workers are paid 3.55 dollars an hour on average.

Another type of web worker is what Facebook calls “content reviewers”. They are never hired directly by Facebook, which prefers to subcontract the work to multinationals like Accenture, Majorelle, Cognizant…

For 800 euros a month, in the example of Accenture as a subcontractor of Facebook in Portugal, people are trained for 3 weeks on how to perform the tasks before starting their job of “cleaning the web”. All day long they are exposed to images of rare violence, pornography and hate speech, which they have to classify not according to their own perception but according to Facebook’s rules. To the journalist who infiltrated the job, the trainer points out: “your problem is that you are reasoning according to your point of view, when you should only apply the rules of Facebook”. This is presented as fair: Facebook is the company, so it would only be logical to apply its point of view.

The content is classified with labels such as “Delete”, “Mark as Cruel”...

Several former contractors suffer from PTSD after being exposed to these images, for less than 4 dollars an hour.


Journal: Le 1, “Les nouveaux prolétaires du web”, 25 September 2019.

Cash Investigation, Mon patron est un algorithme, September 2019.

Things to look further into:

The work of Sarah Roberts, UCLA, on content reviewers.

French deputy Barbara Gomez.

Jeanine Berg and the report on the web workers for the ILO.

Lilly Irani, a researcher at the University of California, San Diego, specialises in work culture in the technology sector.

Last updated: 7 Jan. 2022

After setting up the installation for the first WIP show, I faced limitations in the way I had conducted the work. Even if, after engaging with the subject, I don’t think it has the potential to be developed to an advanced stage, I would like to see the logic through and reproduce the installation in the light of this first experience.

I will redo it almost from scratch and dedicate not the second WIP show but the third Artefact to a more finalised version.

Some technical issues remain; they essentially concern establishing a functioning surround output directly from Unity into the space of G05, and managing the image captioning with NeuralTalk2.

What I would like to expose in this post is the protocol I intend to follow in light of the first field-recording experiments.

The central concerns are:

The quality of the sound, which will be resolved by switching to other equipment and using settings better adapted to the speakers of G05.

The original idea of a discreet camera, a GoPro recording the situation in movement, as in artist Kyle MacDonald’s video NeuralWalks shot in the streets of Amsterdam, did not prove relevant for this installation. The changes are not striking enough when the camera sits on one spot for a few minutes, and it does not reproduce Perec’s experience of sitting in one place and describing what he sees: he constantly changes his point of view across the plaza, choosing to focus on certain points and moving around. This was also one of the limits he found in his own human experience: he is supposedly describing the ordinary without narration, yet he chooses where to point his eyes. To replace the video, I will be working with still images.

I still hesitate on how I will target them: either by aiming at spots he focused on and that can be identified today (e.g. “children are playing under the church pillars” could be one target), or by leading the observations from the points of sound.

One of the central issues during the WIP was that the book was not present in the piece. I had to explain that the work was inspired by it, but even then it was difficult to relate to the original text. I was also doubtful about having written the descriptions to be navigated myself, and even more about their disposition: even if they weren’t maps, they still represented spatial areas.

This also caused a navigation problem with the custom gamepad. Apart from that, the gamepad worked well and intuitively for the users who tested it, except that once inside an area they tended to push the button and expect a reaction.

The new plan of action will tackle all these questions.

I have extracted from the text all the sentences indicating where Perec is standing to observe the square, as well as the indications of his gaze. They can be found in the following collage.

Instead of partitioning the square into spatial sequences, I will record on these same spots. You don’t navigate the space, but the positions in space of the book’s narrator. This solves the question of the map as well as that of writing sentences that could appear random, and it brings fragments of the book to the visitors.

I have also isolated the indications of points of focus and will have to decide whether they will determine the pictures fed to the AI.

I will take samples of 15-30 seconds at those spots (the original recording was longer and irrelevant). I will keep the partition by days and hours. When entering an hour, you will navigate the different spots of the square to isolate their sounds, and the AI’s descriptions will appear on the side walls.

You will be able to create your own soundscape by selecting several sounds at the same time; they will spatialise as you go. This also makes you an actor of the soundscapes, and tends towards the initial intuition of selecting options with the push button.
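The actual installation runs in Unity, but the selection-and-spatialisation logic can be sketched independently. Below is a minimal Python illustration of equal-power panning applied to whichever spots a visitor has toggled on; the spot names and azimuths are invented for the example, and a stereo pair stands in for the G05 surround setup.

```python
import math

def pan_gains(azimuth_deg):
    """Equal-power stereo gains for a source at a given azimuth.

    -90 degrees is hard left, +90 hard right; left**2 + right**2 == 1,
    so perceived power stays constant as a sound moves across the field.
    """
    theta = math.radians((azimuth_deg + 90) / 2)  # map [-90, 90] to [0, 90]
    return math.cos(theta), math.sin(theta)       # (left, right)

# Invented spots on the square, with their azimuth as seen by the listener.
spots = {"church_pillars": -60.0, "cafe_terrace": 10.0, "bus_stop": 75.0}
selected = {"church_pillars", "bus_stop"}  # the visitor toggles several at once

for name in sorted(selected):
    left, right = pan_gains(spots[name])
    print(f"{name}: L={left:.2f} R={right:.2f}")
```

A real surround output would replace the two gains with one gain per speaker (for instance via vector-base amplitude panning), but the principle is the same: each selected source gets its own set of gains, and the mix follows the visitor's selection.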

Last updated: 4 Apr. 2022

Reading notes on:

Psarra, Sophia. Venice Variations: Tracing the Architectural Imagination. UCL Press, 2018.

Introduction: Between authored architecture and the non-authored city

In the chapter “Between authored architecture and the non-authored city”, architect and professor Sophia Psarra introduces the thesis and method developed in her book Venice Variations: Tracing the Architectural Imagination (UCL Press, 2018).

The book unfolds around the study of three artefacts with Venice as a common thread: Venice as a city, the place it holds in Italo Calvino’s work of fiction Invisible Cities, and Le Corbusier’s architectural project for the Venice hospital.

To distinguish the other cities’ qualities, I must speak of a first city that remains implicit. For me it is Venice.

Italo Calvino, Invisible Cities (1972)

For Prof. Psarra, cities, buildings and books are all the result of both collective and individual efforts. Venice embodies this better than any other city, through its capacity to contain a multiplicity of visions and systems of reality that provoke imaginative engagement.

This idea of a city more likely to induce a versatile, rich set of perceptions cohabiting in the same geography becomes more striking when read in relation to the subject of my research. The example of a city close to what immersive technologies aim at inducing, probably in a less subtle, more directive way, is not a “smart city” or one turned towards a certain vision of the future, but rather one that has been, as the author points out, in economic and political decline since the 15th century. Nevertheless, she sees it as an emergent system, the outcome of a highly probabilistic algorithm: a structure that, with a small number of rules, is capable of producing a large number of variations. This way of describing the structure of Venice is very close to descriptions of object-oriented programming.

Beyond programming, the author draws a parallel between the transformation that computer-aided programs are bringing to the practice of architecture today and the invention of architectural notation in the 15th century, at Venice’s heyday. The latter brought major cultural shifts and modified the way of practising architecture, separating the practice of design from the craft of building and leading to the emergence of architectural design distinct from artisanal building traditions.
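Psarra's image of a structure where a small number of rules produces a large number of variations can be made concrete with a toy rewriting system (my own illustration, not from the book): two symbols with two alternative rules each already generate a fast-growing family of distinct strings.

```python
from itertools import product

def rewrite_step(rules, string):
    """One parallel rewriting step: each symbol is replaced by every one of
    its alternatives, yielding all resulting combinations."""
    options = [rules.get(ch, [ch]) for ch in string]
    return {"".join(combo) for combo in product(*options)}

def generation(rules, axiom, depth):
    """All distinct strings reachable from the axiom after `depth` steps."""
    current = {axiom}
    for _ in range(depth):
        current = set().union(*(rewrite_step(rules, s) for s in current))
    return current

# Two rules with two alternatives each: a toy 'grammar of the city'.
rules = {"A": ["AB", "BA"], "B": ["A", "B"]}
for depth in range(3):
    print(depth, len(generation(rules, "A", depth)))
# 0 1
# 1 2
# 2 6
```

Even at this minuscule scale, the count of distinct variations multiplies at each generation, which is the structural point Psarra makes about Venice as an emergent, rule-driven yet open-ended system.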

Venice in the lagoon. Drawing by author Sophia Psarra, GIS data by Universita IUAV di Venezia - laboratorio di cartographia e GIS
