Arguably the single most important problem facing democracy today: getting humans from different segments of society (across political party, wealth, etc.) to spend any amount of time engaging in real-time civil dialogue. I suspect there is just no substitute, and any democracy's ability to function at all will be severely limited without such dialogue. Possibly we could get by with high-quality public quantitative polling data on a truly broad set of subjects, supplemented with high-quality qualitative polling data. But the time when social media could be a good windsock... I think that was just a fleeting blip. So, yes, LLMs will enable high-quality corporate shadow censorship/propaganda and may generally make it harder to reach social consensuses that aren't horribly wrong. But it was going to be hard anyway, and I'd say there's still even money we will actually figure it out if democracies keep tinkering for another couple of centuries.
On the other hand, non-democratic power structures have completely different needs. Right now it still seems like such structures are even more susceptible to over-concentration of power than democracies, leading to capricious decision-making and poor decisions. I wouldn't have been as confident in this 10 years ago, but Xi Jinping's consolidation of power and subsequent incompetence has greatly reduced my fears of stable totalitarianism. Nonetheless, there may be mechanisms yet uninvented that yield a stable totalitarianism in which LLM-based shadow censorship/propaganda leads to profound misunderstandings about reality in the vast majority of humans. It is indeed a spooky thought.
Yes, LLMs can and likely will / already do contribute to censorship :/ The only silver lining is that people will figure out ways around them. One of the nice things about human language is that, as Gödel showed for formal systems, it can't be both complete and consistent! So no matter how fancy an LLM is, it can't catch every possible way of expressing a particular concept.
The tougher thing is LLMs creating rich "paper" trails -- like info ops that come with pre-built websites, fake articles, even faked footage of events. This is already a problem and will only grow with time. Right now my gut feeling is that people will just get a lot more skeptical about anything they see online, and/or spend a lot more time in walled-garden spaces like Discord.
I've recently started talking about LLMs as a tool, and that feels about right for me. A tool can be handy, can be useful... can be dangerous and socially disruptive. But perhaps treating it as a tool can make it feel a bit more manageable.
I think "tool" is the correct frame, and a productive one, since it prompts people to ask "well, how could this tool be used *right now*?", which is the more useful question in the short term.
I hope you're right that people will become more skeptical about anything they see online, though my experience backstopping family forays into YouTube wellness zones makes me worried about the less-savvy 90% of the population. I have a fun short story premised on this sort of future: a useless-for-practical-purposes internet leading to a rebirth of the old-school private investigator. But it's been sitting in editorial limbo for over a year now, so I can't link to it. Someday!
Very good points.