Justin Joque is a visualization librarian at the University of Michigan and the author of the book Revolutionary Mathematics: Artificial Intelligence, Statistics, and the Logic of Capitalism. His book examines the statistical models on which our algorithms, machine learning, and financial systems are built, highlighting the mechanisms of abstraction which lend these models an air of misleading objectivity. Can statistical models be used towards emancipatory aims?
Hi, I’m Talia Baroncelli, and you’re watching theAnalysis.news. Joining me in a bit is Justin Joque, the author of Revolutionary Mathematics. We’ll be speaking about algorithms and the statistical models on which financial systems are based and other models of digital control.
First, before we get started, it would be great if you could go to our website, theAnalysis.news, and get on our mailing list. That way, you’re updated every time there’s a new episode. You can also donate to the show by hitting the red button at the top right corner of the screen. Also, like and subscribe to our YouTube channel, theAnalysis-news, or whatever other podcast service you’re using. Stay with us, and we’ll be back in a bit with Justin Joque.
Joining me now is Justin Joque. He’s the author of Revolutionary Mathematics: Artificial Intelligence, Statistics, and the Logic of Capitalism. He’s also a visualization librarian at the University of Michigan. Thanks so much for joining me, Justin.
Thanks for having me. I’m excited to be here.
It just so happens that it was an algorithm which took me to your book and a recommendation algorithm. It’s pretty fitting that we’re speaking about algorithms and statistical models today. Your book essentially looks at different statistical models and argues that there’s a metaphysical or philosophical assumption in a lot of these models that they are essentially trying to maximize a certain payoff or reward. This actually lends itself to capitalist accumulation and to the financialization of the markets that we’ve been seeing over the past few decades. Why don’t we start with what actually compelled you to write this book in the first place?
Sure. The thing that compelled me was that there was a growing literature about algorithms and especially algorithmic bias—Safiya Noble's book, Algorithms of Oppression, for example, and Virginia Eubanks has also written about this. I found a lot of this work really compelling and super interesting. It struck me that one missing piece was how these algorithms get their power, how the decisions are made, and then the larger questions of the political economy in which they function—also, thinking about the ways that algorithms pick up on what capital has already been doing.
There are all these systems for sorting workers, sorting people, and treating them in biased ways in terms of their access to capital, their access to jobs, and their access to housing. Many of these books and arguments about algorithmic bias showed how algorithms were repeating earlier forms of bias. I thought that there needed to be an extra step to think about the ways in which we couldn’t just fix the problem at the level of the algorithm, that it also was part of this much larger social problem.
Yeah, because we can observe different instances of that in the world that we live in. For example, populations being controlled by digital systems—workers, immigrants, and even newly released inmates, who are still controlled by these digital systems.
There's one instance in your book, which you describe, which is great. Well, quite frustrating if you're in that position. Think of yourself at an airport, going to the gate and asking the worker there to help you change your flight. Then they say, "Sorry, I can't override the system. I can't do anything for you." That's an instance in which we've entrusted so much power to these computer systems.
Right, yeah. One of the really important parts about it is that these systems are making these decisions, but then there are these much larger apparatuses of juridical power that are being used to enforce the decisions. Also, all of the mechanisms of auditing them or protesting decisions are now being stripped away.
One of the examples that happened around the time that I started thinking about the book, which was really informative for me, was MiDAS, the Michigan Integrated Data Automated System. Part of what it did was try to automatically detect unemployment fraud. People who lost their job would get unemployment insurance, and if people were scamming the system, it was supposed to cut them off. They spent a couple of million dollars on it, and it turned out the software didn't work at all. Tens of thousands of people lost their unemployment insurance because the system said that they were committing fraud. They had gotten rid of all of the people who were making these decisions manually and all the people who could audit it. It took a lot of these people five or six years to get the money back; they had to go through the court system. If you're on unemployment insurance, you can't wait five to six years for the money that you need.
I think it’s a really interesting example because it’s not even a question of the algorithm being biased or anything like that. It’s just completely, it was just flat out wrong, and the implications were horrendous for the people who were affected. Thinking both about the algorithms, but then also all of the political and economic power behind those decisions and the ways in which they affect people’s lives.
Well, as someone who only has a cursory understanding of statistics, I think your book really helped me understand the different models that are being used. You speak about frequentist statistical models as well as Bayesian statistical models. You essentially argue that there really isn't a thing that is probability. Setting aside the complexities of quantum mechanics—where something could, in a sense, have a 30% and a 60% probability at the same time—outside of that realm, that's not really possible. You argue that probability is not really a thing. We set these parameters based on what knowledge we have and what we've observed, and we input that into these models. What it gives back to us is not some scientific truth; it is something that's interpreted within the context of these models. Maybe you can speak about frequentist versus Bayesian statistical models and the differences between the two.
Yeah. That's a central component of the book. It really does start with this insight that is in some ways counterintuitive but also very intuitive once you start thinking about it. It's exactly like you said. In the real material world, there's no such thing as probability. You flip a coin, and it lands on heads or it lands on tails—it doesn't 50% land on heads and 50% land on tails. Probability really is, I think, definitionally a metaphysical category. It's an intellectual supplement to the world we've created, partly because of a lack of knowledge. If you flip a coin, you could in principle physically model the air currents and the forces and figure out how it will land. But normally when you flip a coin, you don't know, and so probability is this additional supplement that we add to the world in order to deal with situations that we don't know.
In the book, I trace this shift from frequentist statistics to Bayesian statistics. There are a lot of important components to it, but let's try to boil it down to its essence. Frequentist statistics were very popular from the beginning of the 20th century up until about the '70s or '80s, and this is still what's taught in an introduction to statistics class. Frequentist statistics is the way that probability is given a certain objectivity; it presents itself as an objective theory of probability.
The idea is that you take a long run of something and measure the frequency. Under a really rigorous frequentist framework, you can't assign a probability to an individual coin flip. You have to have a series of, let's say, 100, 1,000, 10,000 coin flips, and if half of them are heads and half of them are tails, then the probability is 50% heads, 50% tails. It's objective in the sense that it's a measurement of frequency, but in a certain sense it still has a lot of subjectivity to it because you have to define what's called the reference class. You have to say: it's only when I'm flipping a coin on these days, or it's this specific coin, or this specific model, or these weather predictions. You still have to define the experiment and what counts as an instance.
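The long-run idea can be sketched in a few lines of Python (an illustration of my own, not anything from the book): the frequentist probability is defined only as the frequency that emerges over a repeated series of trials, never for a single flip.

```python
import random

random.seed(0)

def long_run_frequency(n_flips, p_heads=0.5):
    """Frequentist probability of heads: the observed frequency
    over a long run of repeated flips of the same experiment."""
    heads = sum(random.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

# The estimate only stabilizes over a long series of flips; no
# probability is assigned to any individual flip.
for n in (10, 1_000, 100_000):
    print(n, long_run_frequency(n))
```

Note that the "experiment" here (a simulated fair coin) is itself a choice: picking the reference class is the subjective step the frequentist framework can't avoid.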
Conversely, Bayesian statistics, which became more popular in the '80s, although it really traces back to the 18th century, is a subjective theory of probability. Instead of being based on this long-run frequency, it's really based on my individual knowledge of the situation. The mathematics of Bayesian statistics allows you to start with your guess: I think that this is a fair coin, so I think there's a 50% chance of heads. Then, as I observe more flips, there's a mathematical formula called Bayes' theorem that allows me to update. If I see 10 heads in a row, I might think, okay, maybe this coin is actually biased towards heads, and I use Bayes' theorem to update my probability. What it allows you to do is start with a subjective probability. As you observe more and more instances or get more and more data, the idea is that your probability should become more objective, because it's based less on your prior ideas of what things were and more on the data that you observed.
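The ten-heads-in-a-row updating described above can be sketched in Python (my illustration, not code from the book), using a Beta prior on the coin's bias—a standard conjugate-prior shortcut under which Bayes' theorem reduces to simple counting:

```python
# A Beta(a, b) prior over P(heads) updated with Bayes' theorem:
# each observed head adds 1 to a, each tail adds 1 to b, and the
# posterior mean estimate of P(heads) is a / (a + b).

def bayes_update(a, b, flips):
    """Apply Bayes' theorem flip by flip ('H' or 'T')."""
    for flip in flips:
        if flip == 'H':
            a += 1
        else:
            b += 1
    return a, b

a, b = 1, 1  # flat prior: "this could be any coin"
a, b = bayes_update(a, b, 'HHHHHHHHHH')  # observe ten heads in a row
print(a / (a + b))  # posterior mean 11/12, roughly 0.917: belief shifts toward heads
```

The subjectivity lives entirely in the starting prior; the update itself is mechanical arithmetic, which is exactly what makes it so easy to automate.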
I'm sure we'll talk more about this, but I think one of the important things to note is that even though it's subjective, it's actually much easier to automate than frequentist probability, because frequentist probability requires setting up an experiment, and you can't assign a probability to an individual event. Let's say you go to a website and it wants to show you an ad; there is no frequentist way to say, okay, I think there's a 2% chance you're going to click on this ad. Whereas because Bayesian statistics are subjective, you can assign a probability to the individual event of whether you're going to click on an ad, and you can automate the mathematics of incorporating more data. It's subjective in the sense that it's from a specific epistemic point based on the data that's observed, but it's not subjective in the sense that it has to be a person. The math can be done rigorously enough that it can be automated.
Well, not to get too into the details of Bayesian statistical models, but the approach was founded by a minister and theologian, Thomas Bayes. I think it's about halfway through your book that you speak about Bayes and how he believed in God. There's this subjective element, but it still rests on a notion of objective truth—a deity being the thing which determines everything. You explain how this deity can then be replaced by the market, or the truth of the market. Could you explain how a lot of the models that we have—the trading of derivatives, different financialization models, and very fast automated ways of trading—are based on this assumption that the market is the absolute truth, allocating things in a very specific way, a way which is rational and objective?
Sure. There are a number of components at play there, but absolutely. Bayesian statistics was invented by the Reverend Thomas Bayes in the 18th century. One of the problems that you run into as soon as you accept a subjective theory of probability is that it's really difficult to evaluate whether you're right or wrong. If I say there's a 60% chance of it raining tomorrow and it doesn't rain, you can't say, oh no, you were wrong, you said there was a 60% chance of rain—because the 40% chance that there's no rain covers that. As long as someone doesn't say there's a 100% chance or a 0% chance, an individual probability assignment is not falsifiable. Still, there are different ways that you can arrive at the probability calculus and some of the rules of probability. For example, the probability of an event and the probability of that event not happening should add up to one: if I say there's a 60% chance of it raining, then there should be a 40% chance of it not raining tomorrow.
Bayes’ theorem that Thomas Bayes discovered is based on this regularity and this ability to apply the probability calculus to events. For him, he says, “Look, because there’s this regularity and it follows these mathematical laws, for me, that’s actually proof that God exists because there’s this regularity to the universe.”
In the 20th century, such arguments tended not to hold as much scientific water as they did in the 19th century. There are a variety of ways that people get to the probability calculus, and one of the enduring ones that people keep coming back to is what's called the Dutch book argument. Basically, a Dutch book is a gambling situation where you wager certain amounts of money on different bets and, no matter what happens, you lose money. Say there are three horses in a race, and you bet on two of them, or on all three of them, in various proportions. The idea is that you convert your probabilities to betting contracts: if you think that a coin is two-thirds likely to land on heads and one-third on tails, then you would accept two-to-one odds on a betting contract. You can derive the entire probability calculus out of the desire not to have a Dutch book made against you. If you think that there's a 60% chance of rain and also a 70% chance of it not raining, you can convert those to contracts and show how, whether it rains or not, you would still end up losing money.
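That rain example can be made concrete with a short sketch (my illustration, with made-up contract prices): each belief becomes a contract that costs its stated probability and pays out 1 if the event occurs. Believing in both a 60% chance of rain and a 70% chance of no rain means paying 1.30 for contracts that can only ever return 1.

```python
def net_result(price_rain, price_no_rain, it_rains):
    """Bettor's net outcome after buying both contracts: a 'rain'
    contract and a 'no rain' contract, each paying 1 if its event
    occurs and priced at the bettor's stated probability."""
    cost = price_rain + price_no_rain
    payout = (1 if it_rains else 0) + (0 if it_rains else 1)
    return round(payout - cost, 10)

# Incoherent beliefs (0.6 + 0.7 > 1): a guaranteed loss either way.
print(net_result(0.6, 0.7, it_rains=True))   # -0.3
print(net_result(0.6, 0.7, it_rains=False))  # -0.3

# Coherent beliefs (0.6 + 0.4 = 1): no sure loss can be extracted.
print(net_result(0.6, 0.4, it_rains=True))   # 0.0
```

The only way to avoid a guaranteed loss is for the two prices to sum to exactly one—which is how the rules of probability fall out of the desire not to be Dutch-booked.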
In this deeply metaphysical way, I think you see with the Dutch book argument that the entire edifice of probability and statistics is founded on this idea of the market and on the exchange of contracts. I think there’s this internal metaphysical relationship between statistics and the market or, at least, political economy.
Also, what I argue in the book is that there’s an external one as well, that these systems are constantly making decisions in a world where they are developed by corporations and the government. They’re constantly tied back into ways to make money, which becomes the important way that truth is determined.
Two of the examples that I talk about in the book are Volkswagen's diesel scandal, where they had cars that would detect if they were being tested by the Environmental Protection Agency or other similar agencies and then change how they functioned while being tested, and Project Greyball, a project by Uber to detect if someone opening the app was a government regulator and then hide the rides from them. I think you can see in these examples that these modes of producing knowledge—both internally, in terms of the Dutch book, and externally—are embedded in these systems of political economy and are optimized not for some transcendent idea of truth but for the ability to produce the most value for the owners and creators of these algorithms.
Well, I definitely want to speak about the epistemological foundations of these models, epistemology being the theory of knowledge and how we come to know something and what knowledge is produced. But first, it’s worthwhile to explain how your book is so deeply based on Karl Marx’s Capital and his conception of objectification.
Marx says that social relations are essentially negotiated through objects, and there’s this false consciousness which develops because you tend to treat these relations as being immutable or fixed or as something that’s natural in a sense. You use this interpretation of objectification, whereas some other people focus on objectification as being the dehumanization of the worker. I really like the way you used or deployed objectification in your book. Why don’t we speak about how you developed your analysis using Marx’s objectification?
Yeah. What I find really interesting about this theory of objectification, especially as it's developed in the first volume of Capital, is that what I read Marx as talking about is essentially the way that we let objects think for us and remember for us—he says they manage our affairs behind our backs. You go to buy a cup of coffee, and the sign at the store says it's two dollars or three dollars or whatever for a cup of coffee; that becomes this objective fact that you have to respond to. It's not something where you can say, "Oh, no. For me, it's only worth $1.50." We have this implicit idea that the three dollars are somehow inherent in the coffee itself.
In a lot of ways, I think what Marx is talking about in terms of objectification is almost proto-algorithmic, I guess we could say. At one point in the book, I say that capitalism is the original machine learning. It's this complex social network that tracks prices, value, and labor and presents them to us when we go to the store for bread or pay rent to a landlord. I'm really interested in the ways in which probability does something similar—it becomes this objectified thing, and algorithms keep track of our affairs for us.
You can think of credit scores as a thing that does this. Even though it’s this totally made-up thing, the idea that you should get a credit card or mortgage and improve your credit score, it becomes this objective game that we get caught in. It’s metaphysical, it’s not a real thing, but we know that certain actions will improve our credit score, which increases our access to capital or to credit at least, and other things will hurt our credit score. You can say, okay, this is ridiculous. I don’t want to play this game. But if you want to get a mortgage, car loan, or something like that, you get caught in the objective condition of credit scores. I think there’s something really interesting about the ways in which these algorithms and capital think for us.
One argument that I try to make in the book, and that I think is really important, is that the point isn't that Bayesian statistics are these bad bourgeois statistics and that we need to come up with a new system of doing statistics. In a lot of ways, the Bayesian revolution is an incredibly important discovery, because it shows the ways in which all of this knowledge production is an objectifying force, and one that's tied to political economy. There's no way we can go back to some Enlightenment idea of transcendent truth.
Well, you also speak about Moishe Postone, the Marxist historian and theorist, and his concepts of abstract domination and real abstraction. Essentially, these models take observed phenomena or certain assumptions and feed them into a model, and then you have your output. But I'm wondering, as someone who's not a statistician myself: do these models really reflect the materiality of the world? Is there any way to make them reflect the world more—I don't want to say accurately, because truth is something that's always mediated—but is there a way to make these models more reflective of material conditions and inequalities?
Yeah, that’s a really good question, and it’s a complicated one in a certain sense. I wasn’t saying that these are bad, inaccurate descriptions of the world. I think they actually are incredibly powerful tools that do describe the real material conditions, but precisely in a way that I think you can get from Postone or people like [inaudible 00:22:51], that we live in this world of these real abstractions.
Credit scores and values are real in this sense, even though they're not material things per se. Marx says there are no advances in chemistry that will ever allow us to find the value in cloth or something like that. But that doesn't mean that cloth, coffee, or a house doesn't have value in a real sense. It's not a fiction that we can wish away, and there are no advances in either bourgeois economics or Marxism that will ever figure out the real value of a cup of coffee. These things are socially produced. They're abstractions, but they are very real in a certain materialist sense.
The problem, and I think this is what you were getting at, is that they reflect a world that's fundamentally biased, unjust, and exploitative. I think it's a very difficult thing, and I would have been more concrete at the end about the path forward if I knew exactly what the path forward was. We have to address these things on both ends of the spectrum. We have to address the concrete outcomes, the injustices that these systems produce, and the ways in which they're racist, sexist, and xenophobic. But then we also have to attack and reconceptualize the very metaphysical value relations that allow these systems to be produced and reproduced. It's a question of dealing with the outcomes of these things, but then also trying to reimagine the entire system of political economy: how we distribute goods, organize labor, and socially produce these real abstractions.
Well, why don't we stay on this for a bit? There's something in your book which was very interesting. You're saying that you're not trying to de-objectify these statistical models. You're not saying that we should do away with technology, whereas some people want a return to not having any technology because, in their view, all technology is bad. But I wonder to what extent that is actually possible. If these statistical models have a certain way of abstracting and potentially misrepresenting certain things in the world, how is it that we can still use them for emancipatory or equitable ends? Is it just a matter of changing the aim? Instead of using a model to maximize a certain payoff or profit, if the aim is some redistribution of wealth or something equitable, can you just plug in that aim? Or will these models have—I don't want to say a mind of their own, because I don't think artificial intelligence is really intelligent; it doesn't have a psyche, it's based on what we put into these models. Do you think it's actually possible for these sorts of models, which are based on abstraction, to be of service to an emancipatory project?
Yeah. There are two levels on which it's important to answer that question. The first is that, at some point, it's impossible to step outside of abstractions and these forms of objectification; we're always involved in the world through various abstractions. We're never actually dealing with things in themselves—the soil under our feet is always functioning as land, or as something potentially productive, or as some form of knowledge. So even outside of capitalism, I think there are always forms of abstraction, forms of objectification. The question is not objectification in and of itself, but precisely what it is that's objectified, the types of abstractions, and what reality, or real abstractions, they end up producing.
With that in mind, on the question of technology specifically, I fall in the middle. I don't see myself as a technological determinist. I don't think that the outcome of what these tools and systems do is fully predetermined. But I'm also not on the end of the spectrum where I think, okay, it's just a tool, and it's totally up to you how to use it. It's somewhere in between. There's a lot of freedom in how we use these tools; they can be put to different ends, but they do smuggle with them certain metaphysical understandings of the world and ways of operating. We can't say, oh, I'm going to take an AI and do something totally different and be completely removed from the market, because we're stuck in these metaphysical systems.
I think they can be put towards emancipatory projects; we can try to use them and think of other ways of evaluating outcomes, especially if we think about the implications in terms of climate or something like that—those models can be used to calculate those things. It's always difficult, because internally they have these market-driven metaphysics, but also because emancipatory projects exist within a world that's predominantly capitalist, and there's a tendency for capitalism to subsume these things, to continually eat them up and reconfigure them in ways that continue to build these systems of domination and exploitation.
That’s maybe the long answer, but the short answer is yes, I think they can be used for emancipatory projects. There is no guarantee that any project, just because of our will that it should be emancipatory, is going to succeed in being emancipatory.
There's, of course, always the issue of what Mark Fisher terms capitalist realism. The way that we view things is always limited by the capitalist confines in which we live, so it's really hard to think beyond those parameters. But I also wanted to speak about something that's been in the news recently: this idea of the Google search engine always giving you the best results. I think that's been problematized recently, where sometimes the search results that you get are random or not accurate. Some people argue that that's a result of monopolization—Google has monopolized that sector of the market, so there's no competition, and so they don't need to optimize or improve their product. But aside from that and looking at the big picture, is it possible that some of these AI machine-learning systems are just generating nonsense? What is the knowledge that they are producing? Can we term it knowledge, or is it something else that is clouding our intellect?
Yeah. I think that there is a way in which they're producing knowledge. They're producing a certain gloss on what exists out there. To some extent—and I have to think through this more—maybe there's an analogy to socially necessary labor time: it's like socially average knowledge is being smashed together and reproduced. I think the risk is that, right now, it's slurped up all this data based on things that people have written and artists have drawn and created. The problem is that it's churning out so much text, so many images, that are then going to get sucked back up into future versions of these systems.
There have been some studies recently where they trained these large language models on the output of earlier iterations of large language models, and they go completely off on their own and become increasingly incomprehensible. I think there is a distinct risk that rather than these systems getting better, we're actually going to see them get worse, in precisely the same way that I think Google search has gone downhill significantly—they're losing the battle to search engine optimization and to people gaming the system. We'll see AIs producing content and then sucking that back up, producing weirder and weirder content and going off on their own.
There was one paper I was reading, and this was a few years ago. They had two AIs that were talking to each other, and they eventually created their own language that was completely incomprehensible to people. I think that maybe that’s the direction that this will go, rather than this fantasy of it becoming more and more accurate or anything like that.
Well, if it's possible that these AI models will deteriorate, what will be the role of data pools potentially getting poisoned? I was recently reading an article about China's AI and how they're worried—whether this worry is warranted or not—about some of their systems being hacked and certain data being inserted into their data sets, which would then skew the results or make these models not work properly. I wonder if that could become a long-run issue that impacts a lot of different models. What would the outcome of that be, then?
Yeah. I think that it is likely to become an increasingly bigger problem. It gets back to the ways in which a lot of these algorithms are designed basically to produce the most value rather than to satisfy some other measure of knowledge. It becomes an arms race—you can think about it in terms of global competition, too—where it becomes easier to poison your adversary's pool of data than to improve your own. People speak about crises of truth and crises of science, but I think, at some fundamental level, they're primarily crises of capitalism. Capitalism has become informational capitalism, based on the mass extraction of knowledge. The ways in which knowledge production is being incentivized mean that, even aside from malicious actors, there's going to be this tendency for these data pools to become worse and worse, lower and lower quality, to the point where these algorithmic systems of socially averaging knowledge will, as you said, just become increasingly worthless and unusable.
Maybe this is a bad analogy, and I should think through it more, but in the same way that capitalism has destroyed the physical environment, I think we're entering a point where capitalism risks destroying the informational environment in which we live—both poisoning it and, through what we might call data pollution, producing these constant streams of increasingly lower-quality data.
So it essentially creates its own raison d'être as well as its own demise.
Yes, exactly. That’s a very good way to put it.
I was actually wondering why you focus so heavily on Marx. I’m totally partial to his analysis, but what was it in Capital that made you think I can tie this to what’s going on with statistical models, financialization, political economy, etc?
That's a good question. I think I've always been drawn to a certain Marxist analysis, but it really is this idea of objectification that I find incredibly persuasive—thinking about the ways in which we let systems decide for us, because that's really what it seems to me that we're doing with algorithms. We're saying: we'll write this code, it works, and then we'll set it aside and let it manage our affairs. That's really what drew me to this Marxist analysis. Then there's also, in my mind, an analogy between the way value functions as a metaphysical supplement—these systems of producing and comprehending value—and the way probability works: it's not something that actually exists in the world, but it's an incredibly powerful tool that people use to understand the world, with all sorts of consequences.
There's another thing you address in the book, which is this idea of the revolutionary or political subject. Marx speaks about this ahistorical, trans-historical subject which is supposed to bring about the revolution. You give various examples as to why this concept doesn't always hold water. It's largely built around a Western, potentially male subject and doesn't account for the various forms of colonization or oppression that have taken place around the world, where the political subject might be different and might not be determined by the same historical forces that Marx describes. How do we get around that problem? Who is the subject in steering these models away from something more or less malevolent and toward something better? Is there no such thing? Should we do away with the concept of the revolutionary subject altogether, or is there another way of understanding it?
I think, in a lot of ways, I very much appreciate Moishe Postone’s analysis when he says that the subject of capitalism is neither the capitalist nor the worker, but it’s actually capital that is the subject. It’s what makes these decisions for us. Even though we feel like we get to choose these things in a lot of ways, it is these larger structures that make decisions for us. It’s exactly like what we’re talking about. You can do whatever you want, but at the end of the day, you have to work in order to make money to pay rent, to buy bread at the market and stuff like that. I’m not entirely opposed to the idea of some revolutionary subject.
I think what interests me much more is thinking about the ways that we can actually change the objective conditions under which we find ourselves, so that these changes come about objectively—not in a traditional understanding of objectivity, but in this Marxist understanding. It becomes less voluntaristic or willful, less a sovereign decision to change the world, and more a tinkering and playing with the social rules under which we exist that creates more emancipatory potential.
I think that's probably a good description of how I think about these things. I find that sovereign subject to be a very difficult and tricky thing for all the reasons that you mentioned. I guess the hope of the book is that there's this other terrain where we can work on these problems, one that doesn't involve reimagining a traditional political subject.
Well, Justin Joque, it was really great speaking to you about your book, Revolutionary Mathematics. I thought it provided a really great overview of the history of how these statistical models have been developed and how these models are currently being used in the service of capital, and what might be done about it, even though obviously there’s no one size fits all solution to what can be done. I think you really illustrate how these models are often working against us and controlling us to a certain degree. I think just being cognizant of that fact is important in and of itself.
Thanks so much, Talia. This has been a really fun conversation.
Thank you for watching theAnalysis.news. If you haven’t done so already, please go to our website, theAnalysis.news, consider donating to the show by hitting the button at the top right corner of the screen. Also, like and subscribe to our YouTube channel, theAnalysis-news, and subscribe to whatever podcast you’re using to watch this show. Please do visit us again, and I hope you enjoyed this content. See you next time.
Justin Joque researches philosophy, technology and media and is the visualization librarian at the University of Michigan.