Eric Schmidt Sh*ts the Bed in Stanford Interview.

How Oligarchs Speak (When They Think No One Is Listening).

Image Description: A photo from a defense innovation board meeting with an arrow pointing to Eric Schmidt.

Summary: Have you ever wondered what oligarchs talk about when they think no one is listening? Wonder no more. This week we dissect Eric Schmidt’s recent Q&A at Stanford University where he talks about remote work culture, the future of AI and how big tech intervenes at the highest levels of government. The kicker is…he thought it was private and didn’t realize it was being recorded. Some of his comments regarding the laziness of remote workers and the “arrogance” of the programmer community went viral, prompting Schmidt to apologize and scramble to have the video removed. Considering he ran the world’s biggest internet search engine, he should have known that nothing ever really disappears. In fact, we dug up his own words from 2013 when he said exactly that. Beyond the comments that got the most attention in the media (for a minute), the conversation reveals much more about the dark side of oligarchy in the United States.

“Why would a good liberal like me do that?”

This was an offhand comment made by Eric Schmidt, Silicon Valley billionaire investor and former CEO of Google, during an interview with Stanford professor and fellow Erik Brynjolfsson in front of a group of students at Stanford. To me it was the funniest line of the entire interview because of the context in which it was delivered. Before we get there, let’s back up because we have a lot of ground to cover.


This is what Unf*ckers have been training for over the past four years. You’ve done the work. Your ears are trained for this. We’re building on lessons and themes from our Chicago School episodes, Schumpeter’s theory of Creative Destruction, biographical episodes on Peter Thiel and Elon Musk and, most recently, our interview with Yanis Varoufakis covering his book Technofeudalism.

Now, you might have missed the controversy surrounding some of Schmidt’s remarks at Stanford, partly because it happened in the middle of August and partly because Schmidt petitioned Stanford to remove the video. Considering Schmidt ran Google and YouTube, he should know that nothing in this world is ever deleted. In fact, he even said so in a 2013 book titled The New Digital Age, saying:

“The possibility that one’s personal content will be published and become known one day—either by mistake or through criminal interference—will always exist. People will be held responsible for their virtual associations, past and present.”

He co-authored the book with Jared Cohen, a former advisor to Condoleezza Rice and Hillary Clinton who went on to work alongside Schmidt at Google and now hangs his hat at Goldman Sucks. And I’m really glad I held onto my copy of it.

The subtitle of The New Digital Age is Reshaping the Future of People, Nations and Business, which should give you some indication of the level of self-importance Schmidt ascribes to his station in life. Schmidt is the prototypical master-of-the-universe type who feels no sense of conflict calling himself a good liberal while committing the sins of the ruling elite. It’s the same lack of awareness that comes from running a company whose motto was, “Don’t be evil,” while doing a ton of evil shit.

The version of Schmidt that co-authored The New Digital Age is a toned-down version of the mask-off Schmidt laid bare in the Stanford interview. In the book, he and Cohen come off as dispassionate observers of the tech revolution and the influence it has on all aspects of life. They predict a world where technology brings unprecedented connectivity, reshapes governments, and affects global power dynamics. They discuss both the positive impacts—such as democratization and economic growth—and the challenges, including cybersecurity threats, privacy concerns, global terror networks and state censorship. They blithely quote conversations with figures as diverse as Julian Assange and Henry Kissinger.

(Kissinger is even referenced in the Stanford interview, but strangely his name isn’t prefaced with the title “war criminal.”)

Schmidt and Cohen conclude their exercise in reshaping the world as we know it with a superficial statement that encapsulates the indifference with which they clearly view the world.

“We cannot eliminate inequality or abuse of power, but through technological inclusion we can help transfer power into the hands of individual people and trust that they will take it from there. It won’t be easy, but it will be worth it.”

The reality is that these guys aren’t trying to curb inequality or even understand its humanity-shredding effects. In fact, the book mentions inequality exactly once in the opening paragraph of the first chapter before barreling through a litany of examples of how technology will erode our privacy, build rogue terror states, change the nature of warfare and improve reconstruction after the devastation resulting from technological warfare. There’s a passing mention of economic inequality in places like South America and Africa midway through the book and that’s about it. The entire work talks about the potential upside of technology while warning of the catastrophic downsides, never once stopping to ruminate on their own role in both.

Schmidt’s most prominent role in business was first as CEO, then Executive Chairman and ultimately technical advisor to Google and Alphabet. He was enormously successful during this 19-year relationship with the organization that made him a multi-billionaire. Since then he’s gone on to chair committees and foundations, write a couple of books and fund a number of startups. Today he’s considered one of the most influential people in the world of AI and has amassed a personal fortune estimated to be in excess of $20 billion.

There’s the backstory on Schmidt to set the table for this listen-and-learn episode where we get a chance to eavesdrop on the oligarchy. This is what their conversations sound like when they think we aren’t paying attention. There have been a range of reactions to this interview starting with “who gives a shit.”

For a generation of tech students and Silicon Valley enthusiasts it was probably invigorating. What got media attention, however, were these remarks on remote work and competitiveness, which is what led to the backlash along with an apology from Schmidt and a request to remove the video.

“Google decided that work life balance and going home early and working from home was more important than winning.”

But the strongest reaction is the one you’ll probably have, given all we’ve learned together about the inner workings of power and the callousness of the oligarchy. His critique of remote workers is probably the least offensive thing he says if you know what to listen for.

So without further ado, let’s follow along.


Students and would-be tech entrepreneurs gathered to hear the sage advice of billionaire Eric Schmidt in an unfiltered conversation about the future of AI. Because Schmidt forgot it was being filmed and somehow didn’t see the camera set up in the small lecture hall until it was pointed out to him, he really let loose. Schmidt doesn’t get the normal tech billionaire media treatment like the Zuckerbergs or Musks of the world. He was the hired gun at Google, brought in because Larry and Sergey were told they needed an adult at the helm. But Schmidt proved to be an effective corporate leader and during his time at Google and even since then, he’s joined the ranks of the oligarchic class in the United States and, as you’ll learn, has his hands in more than just AI startups as an angel investor these days.

Let’s start off by examining the opening parts of the interview where he talks about TikTok as it relates to coding and information.

Stealing Intellectual Property

ES: The government is in the process of trying to ban TikTok. We’ll see if that actually happens. If TikTok is banned, here’s what I propose each and every one of you do. Say to your LLM the following: ‘Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour if it’s not viral do something different along the same lines.’ That’s the command. Boom. Boom. Boom. Boom. Right? You understand how powerful that is if you can go from arbitrary language to arbitrary digital command, which is essentially what Python in this scenario is. Imagine that each and every human on the planet has their own programmer that actually does what they want, as opposed to the programmers that work for me who don’t do what I ask. Right? The programmers here know what I’m talking about. So imagine a non-arrogant programmer that actually does what you want and you don’t have to pay all that money to, and there’s infinite supply of these programmers.

Interviewer: And this is all within the next year or two.

ES: Very soon.

And we’re off.

The media fixated on Schmidt’s suggestion to steal TikTok’s code and all of its users and assets. Understandable. The music licensing industry had thoughts, to be sure. Later on he kinda, sorta walks back part of this but in the process digs the hole even deeper. So we’ll get to that in a few. But there’s obviously a lot more at play here. First is the blatant disregard for intellectual property. I mean, think about what he’s suggesting here and how American companies would respond to programmers in other sovereign jurisdictions making similar commands and ripping off the core architecture of literally every tech platform out there. Pandemonium.

Also…possibly pretty cool if we consider the arguments made by Varoufakis in Technofeudalism. Central to his thesis in the book was that the tech companies created a walled garden that not only eliminated market competition, it foreclosed on access for a great many in the population. So in one sense Schmidt is entertaining the notion that within the next year or two, rogue—meaning non-corporate affiliated—networks of programmers will have the technical capability through AI to replicate any platform in the world and iterate on it, continuously refining and improving it until it becomes something altogether different and presumably better.

As for the legality of this, again he gets into that later. But what he’s proposing theoretically undermines the entire capitalist system and tech infrastructure of the world. And he’s kind of celebrating it while talking shit about the “arrogant” programmers on whose backs his billions were accumulated.

And for what? For demanding living wages and pushing back on morally or ethically compromised code? The level of detachment and sheer irresponsibility contained within this clip alone is staggering. Because, remember, he’s talking to students at an elite institution who want to literally be him. So these are the guidelines he’s giving the next generation? Steal everything, apologize for nothing, and abuse your workforce?

Let’s move on.

AI Energy and Investment Needs

ES: So you asked about what else is going to happen. Every six months I oscillate—so we’re on a—it’s an even/odd oscillation—so at the moment the gap between the frontier models, which there are now only three—I’ll review who they are—and everybody else appears to me to be getting larger. Six months ago I was convinced that the gap was getting smaller, so I invested lots of money in the little companies. Now I’m not so sure, and I’m talking to the big companies and the big companies are telling me that they need $10 billion, $20 billion, $50 billion, $100 billion.

Interviewer: Stargate is what, $100 billion, right?

ES: Very, very hard. I talked—Sam Altman is a close friend—he believes that it’s going to take about $300 billion, maybe more. I pointed out to him that I’d done the calculation on the amount of energy required, and I then—in the spirit of full disclosure—went to the White House on Friday and told them that we need to become best friends with Canada, because Canada has really nice people, helped invent AI and lots of hydro power. Because we as a country do not have enough power to do this. The alternative is to have the Arabs fund it, and I like the Arabs personally, I spent lots of time there, right. But they’re not going to adhere to our national security rules. Whereas Canada and the U.S. are part of a triumvirate where we all agree.

Interviewer: So [with] these $100 billion, $300 billion data centers, electricity starts becoming the scarce resource.

“The Arabs.” How casual. ‘The Arabs have the money, but you know, they’re Arabs—and don’t get me wrong, some of my best friends are Arab—but they’re a little light on the whole national security thing. So we’ll go to Canada. They’ll do anything we ask them and we can even steal their hydro power.’ I’m just spitballing here, but…$300 billion into energy-sucking server farms that increase the capacity of a completely unchecked technology that even the programmers don’t fully understand? What could possibly go wrong?

I guess what struck me here is the sheer amount of money the investor class has to risk on technology that, as Schmidt admits shortly, is a clear investor bubble, all for the sake of pushing a completely unproven technology on the world as it consumes more and more of our precious resources. All of these funds came from the public, from the free money government lending programs created under quantitative easing, corporate profiteering and technofeudalism-style gatekeeping that allowed the corporate class to hoard wealth that could otherwise be circulating through the broader economy.

This is the money they’re willing to risk, which means it’s still just a fraction of the accumulated wealth of the investor class. For a little context, $300 billion is the GDP of Finland. The pocket change they can afford to risk on technology with unproven ROI but potentially catastrophic consequences for intellectual property, privacy and the job market (not to mention the rogue states that can tap into this technology due to the lack of restrictions and guardrails) is the equivalent of an entire nation’s gross domestic product. This is where inequality has led, and one of the problems with this is that we’re wholly incapable of crafting a regulatory framework for how this level of investment can be deployed so as to incur minimal harm to the people and the planet.

And here’s the nugget that got Schmidt in some hot water in the media.

Work-Life Balance

ES: Google decided that work life balance and going home early and working from home was more important than winning.

Like I said, this was the part that embarrassed Schmidt and grabbed all of the headlines. This is the Silicon Valley ethos that we celebrate in this country. The entire mantra here is cheat, steal, hustle and grind. Code your ass off to become a billionaire and apologize later for any harm done on the way.

While the rest of the civilized world is contemplating shorter work weeks, labor protections and regulatory frameworks to protect workers and consumers, we’re marching decidedly in the other direction. And the investor class is cheerleading every step of the way because they’re under the illusion that this is somehow in our best interests lest the Chinese catch up to us. It’s all so delusional but this is the mindset the free market neoliberals have cultivated. Rather than creating a system that distributes wealth and opportunity it’s all about the tech bro sigma culture of labor theft. Schmidt continues on this line of thinking…

Startup Culture

ES: The reason startups work is because the people work like hell— and I’m sorry to be so blunt—but the fact of the matter is, if you all leave the university and go found a company, you’re not going to let people work from home and only come in one day a week if you want to compete against the other startups.

Interviewer: In the early days of Google, Microsoft was like that.

ES: Exactly.

Interviewer: But now it seems to be—

ES: And there’s a long history of, in my industry, our industry I guess, of companies winning in a genuinely creative way and really dominating a space and not making this the next transition. It’s very well documented, and I think that the truth is, founders are special. The founders need to be in charge. The founders are difficult to work with, they push people hard. As much as we can dislike Elon’s personal behavior, look at what he gets out of people. I had dinner with him and…I was in Montana. He was flying that night at 10:00 p.m. to have a meeting at midnight with xAI. Right?

Interviewer: Midnight.

ES: Think about it. I was in Taiwan—different country, different culture—and they said that—and this is TSMC, who I’m very impressed with. And they have a rule that the starting PhDs coming out of the—they’re good, good physicists—work in the factory on the basement floor. Now can you imagine getting American physicists to do that? With PhDs? Highly unlikely, different work ethic. And the problem here, the reason I’m being so harsh about work is that these are systems which have network effects, so time matters a lot. And in most businesses time doesn’t matter that much, right? You have lots of time. You know, Coke and Pepsi will still be around and the fight between Coke and Pepsi will continue to go along and it’s all glacial, right? When I dealt with Telcos, the typical Telco deal would take 18 months to sign, right? There’s no reason to take 18 months to do anything. Get it done. We’re in a period of maximum growth, maximum gain.

Imagine busting your ass in higher education for a decade or more to achieve a PhD only to have some fucknugget like Schmidt tell you that you’ve earned a spot in the basement of some tech company highrise. Not to mention, he’s telling this to a group of Stanford students who are probably on their way to do just that. He’s basically shitting on them before they’ve even gotten started and telling them that he’s rooting for a world in which their hard work is meaningless unless they’re willing to commit the hours to stealing other people’s IP.

And for what? So they can join the elite cabal of jet-setting douchebags like Elon Musk? You admire Elon Musk because he flew out at 10:00 p.m. to make a midnight meeting? Hopefully a picture is starting to emerge that nowhere in the oligarch thought process are the people who might be affected by their monstrous creations.

So I guess the reason we need to treat workers like caged animals and deplete energy sources throughout North America is because of the Chinese, right? Nope.

Competing with China

ES: We’re ahead, we need to stay ahead, and we need lots of money to do so. Our customers were the Senate and the House, and out of that came the CHIPS [and Science] Act and a lot of other stuff like that. A rough scenario is that if you assume the frontier models drive forward and a few of the open source models, it’s likely that a very small number of companies can play this game—countries, excuse me. What are those countries, or who are they? Countries with a lot of money and a lot of talent, strong educational systems and a willingness to win. The U.S. is one of them. China is another one. How many others are there?

Interviewer: Are there any others?

ES: I don’t know, maybe. But certainly in your lifetimes the battle between the U.S. and China for knowledge supremacy is going to be the big fight, right? So the U.S. government banned essentially the Nvidia chips—although they weren’t allowed to say that was what they were doing, but they actually did that—into China. They have about a 10 year chip advan—we have a roughly 10-year chip advantage in terms of sub DUV that is sub—

Interviewer: 10 years?

ES: Roughly 10 years.

Interviewer: Wow.

ES: So an example would be, today we’re a couple of years ahead of China, my guess is we’ll get a few more years ahead of China, and the Chinese are whopping mad about this. It’s like hugely upset about it.

Oh, so we’re a decade ahead of China? I guess that’s not enough to fight whatever fucking imaginary battle this is for AI supremacy that’s going to somehow democratize coding and potentially spread election misinformation, steal anyone’s identity and place military technology into the hands of rogue actors. That’s interesting but not the part I really want to point out.

Did you catch how casually he said that he and other tech billionaires petitioned Congress to build chip manufacturing in the United States, which led to the CHIPS and Science Act? Listen, the CHIPS Act was one of the crowning achievements of the Biden administration. It has already supercharged investments into the sector and it portends good things for our domestic manufacturing economy, and it also reduces the probability that we’ll ever be stuck with the supply chain snarls we experienced during the pandemic. That’s not the part I have a problem with. The problem is relational.

Who’s working for who here?

Our system of representation is so fully co-opted by the ruling elites that when they sit before Congress and make requests they get landmark legislation as a result. It’s a real window into how things get done. All that red tape, negotiation and compromise that members of Congress blame for why we can’t pass comprehensive welfare reforms or any other humanitarian measure suddenly disappears depending on who’s sitting in front of them. Now, we all know this to be true or at least we all suspected it, but this is Schmidt casually saying the quiet part out loud. Our elected officials work for the tech oligarchs.

Aside from the war over AI hegemony, are there any other battles being waged by the corporate class?

War in Ukraine

Interviewer: Well, let’s talk about a real war that’s going on. I know that something you’ve been very involved in is the Ukraine war and in particular—I don’t know how much you can talk about White Stork and your goal of having $500 drones destroy $5 million tanks. How’s that changing warfare?

ES: So I worked for the Secretary of Defense for seven years and tried to change the way we run our military. I’m not a particularly big fan of the military, but it’s very expensive, and I wanted to see if I could be helpful. And I think, in my view, I largely failed. They gave me a medal, so they must give medals to failure or you know, whatever. But my self-criticism was, nothing has really changed. And the system in America is not going to lead to real innovation. So watching the Russians use tanks to destroy apartment buildings with little old ladies and kids just drove me crazy, so I decided to work on a company with your friend Sebastian Thrun—he’s a former faculty member here—and a whole bunch of Stanford people. And the idea basically is to do two things: Use AI in complicated, powerful ways for these essentially robotic wars, and the second one is to lower the cost of the robots. Now you sit there and you go, ‘why would a good liberal like me do that?’ And the answer is that the whole theory of armies is tanks, artilleries and mortar, and we can eliminate all of them.

He worked for the military even though he’s “a good liberal.” Because he wanted to help them “do war better.” And they didn’t listen because he thinks our military sucks. So he decided to start his own military endeavor on the side to show them how to wage war with robots. The old way of murdering civilians abroad is so costly and outdated.

The disconnect here would be laughable if it wasn’t so horrifying and delivered in such a nonchalant manner. We’ve heard this before from Peter Thiel and Palantir. From Erik Prince and Elon Musk. These guys talk about warfare and spycraft as though they’re in charge. And it increasingly seems like it’s because they fucking are.

We don’t understand what we’re building

Interviewer: So there was an article that you and Henry Kissinger and Dan Huttenlocher wrote last year about the nature of knowledge and how it’s evolving. I had a discussion the other night about this as well. So for most of history humans sort of had a mystical understanding of the universe, and then there’s the Scientific Revolution and the Enlightenment; and in your article you argue that now these models are becoming so complicated and difficult to understand that we don’t really know what’s going on in them. I’ll take a quote from Richard Feynman: ‘What I cannot create, I do not understand.’ I saw this quote the other day. But now people are creating things that they can create, but they don’t really understand what’s inside of them. Is the nature of knowledge changing in a way? Are we going to have to start just taking these models at their word without them being able to explain it to us?

ES: The analogy I would offer is to teenagers. If you have a teenager you know that they’re human, but you can’t quite figure out what they’re thinking. But somehow we’ve managed in society to adapt to the presence of teenagers, right? And they eventually grow out of it. And this is serious. So it’s probably the case that we’re going to have knowledge systems that we cannot fully characterize, but we understand their boundaries, right? We understand the limits of what they can do, and that’s probably the best outcome we can get.

Interviewer: Do you think we’ll understand the limits?

ES: We’ll get pretty good at it.

But what if we don’t?

And, I’m sorry, but teenagers might be churlish and snarky, but I’ve dealt with them extensively and they’re pretty easy to figure out. It’s a horrible analogy designed to make him seem more human, but the lack of humanity in the construct of his thought process is incredibly troubling. The fact that he so thoughtlessly drops the very real fact that they do not fucking understand what they’re building but hope to someday come to understand it should set off warning bells throughout the halls of Congress to get some fucking regulations on the books and slow this all down a bit.

For the love of god, if we’re a decade ahead of the Chinese government then slow the fuck down and think about what we’re unleashing on the world. Maybe have an extended movie watching session of all the sci-fi movies that might have actually predicted what comes next.

Free Market: Forgiveness over Permission

ES: Well you have to assume that the current hallucination problems become less, right? As the technology gets better and so forth. I’m not suggesting it goes away. And then you also have to assume that there are tests for efficacy, so there has to be a way of knowing that the thing succeeded. So in the example that I gave of the TikTok competitor—and by the way I was not arguing that you should illegally steal everybody’s music—what you would do if you’re a Silicon Valley entrepreneur, which hopefully all of you will be, is if it took off, then you’d hire a whole bunch of lawyers to go clean the mess up, right? But if nobody uses your product it doesn’t matter that you stole all the content. And do not quote me, right?

Interviewer: You’re on camera.

ES: Yeah, that’s right. But, you see my point. In other words, Silicon Valley will run these tests and clean up the mess, and that’s typically how those things are done.

And there it is. The billionaire philosophy. Ask forgiveness, not permission. The halting way in which he responds to being filmed shows you that somewhere deep in what’s left of this guy’s soul is a recognition that everything he’s saying is really fucking dangerous.

AI Investment Bubble

ES: And the final thing is that there is a belief in the market that the invention of intelligence has infinite return. So let’s say you put $50 billion of capital into a company, you have to make an awful lot of money from intelligence to pay that back. So it’s probably the case that we’ll go through some huge investment bubble, and then it’ll sort itself out. That’s always been true in the past and it’s likely to be true here.

So what if the investor class gets burned when this AI investment bubble bursts? He’s right that they can’t all be winners, and that it’s likely the big guys are going to win the day and consolidate their interests. There’s going to be a massive amount of blood in the streets among investors when the ROI simply doesn’t show up in their portfolios from the AI gambit. So what, right? They lost their shirts in 2000 and then again in 2008. And so they’ll lose them again.

That’s one way of looking at it.

Then there’s the concept of opportunity cost. If we’re 10 years ahead of the next closest competitor…If we’re likely to burn hundreds of billions of dollars of investment capital…If we run a high risk of creating a monster we can’t control in service of massive upside potential for a handful of tech billionaire investors… If we increasingly deplete our natural resources in this vague and dangerous pursuit…Then the question is, what was the opportunity cost associated with it? Where could these dollars otherwise have gone to improve our lives instead of enriching a handful with the vague promise that it might, maybe, someday, make things a little better for everyone, and only after we’ve learned to cope with the devastation we’ve wrought?

While we’re thinking out loud, is there anything else that could go wrong along the way?

Election Misinformation

Audience Question: How can we stop AI from influencing public opinion and spreading misinformation, especially during the upcoming election? What are the short and long-term solutions?

ES: Most of the misinformation in this upcoming election and globally will be on social media, and the social media companies are not organized well enough to police it. If you look at TikTok for example, there are lots of accusations that TikTok is favoring one kind of misinformation over another, and there are many people who claim, without proof that I’m aware of, that the Chinese are forcing them to do it. I think we have a mess here and the country is going to have to learn critical thinking. That may be an impossible challenge for the U.S., but the fact that somebody told you something does not mean that it’s true. I think that the greatest threat to democracy is misinformation, because we’re going to get really good at it. When I ran/managed YouTube, the biggest problems we had on YouTube were that people would upload false videos and people would die as a result. And we had a no death policy. Shocking.

They have no good answer for this. “The country is going to have to learn critical thinking.” Of all the patronizing bullshit this guy stuffed into a half-hour presentation, this might be the most callously offhanded “not my problem” response. We’re going to make a mess here; it’s your responsibility to figure out what’s real and what’s not. What’s safe and what’s dangerous. Unf*ckers, this is what dystopia looks like. This is Orwell and Huxley collaborating on a script filmed by Tarantino and Kubrick.


I know for many of you that this isn’t necessarily enlightening. For Unf*ckers, it’s validation. The world we’ve learned about together is real. You’re not losing your mind. They’re the ones that have lost their minds. And as we gnash our teeth, point fingers at one another and settle into our confirmation biases and election tribes, it’s also a really good reminder of who the enemy of the people really is.

They want you distracted and agitated.

They want you to have just enough money to eke out an existence, but stressed enough that you can’t take a moment to think for yourself.

Because if we all had that luxury and were exposed to the words and actions of monsters like Eric Schmidt there would be revolution in the streets. And now we know that they’re working toward a future where a handful of billionaires like Schmidt, Thiel and Musk control the information we see and hear, who we bomb and how, and which country’s natural resources we’re going to plunder.

Now that I think about it, maybe we aren’t the ones who should be mad about this. They’re not just ruining our lives and putting us out of work, they’re taking politicians’ jobs. That’s their jam.

Here endeth the lesson.



Max is a basic, middle-aged white guy who developed his cultural tastes in the 80s (Miami Vice, NY Mets), became politically aware in the 90s (as a Republican), started actually thinking and writing in the 2000s (shifting left), became completely jaded in the 2010s (moving further left) and eventually decided to launch UNFTR in the 2020s (completely left).