Loose thoughts on this AI madness
We live in pretty uncertain times. Many notes to self on AI:
- There is something slightly awe-inspiring about what the latest models can do by themselves. That much is undeniable. Their ability to go back and self-correct, question assumptions, and be very thorough about shortcomings, limitations and caveats feels like a real change from the previous iterations
- The pace of progress, and rate of change in abilities, has far outstripped what I had conservatively imagined, and certainly what the harshest critics predicted. I still don't think there are clear examples of "novel" thinking or discoveries (but this might just be my ignorance), but the future and continued growth is all very uncertain and unknowable
- The consequences of what AI can and will do/replace seem to be changing with each iteration. I previously felt it might only be able to replace the mundane, coding-related parts of my job. Now I feel it has some scope to replace decent chunks of more thinking-related work. Not all, mercifully, yet, but some. Either way - the pace of growth is very high - seemingly fuelled by the single-minded, cult-like belief in AGI within the giants (OpenAI, Anthropic, etc.), stiff competition, and an unbelievable amount of hype + money.
- Now, all of this is just my personal experience. Questions we might want to be asking ourselves as a society are: (a) is this the best use of our resources? Laissez-faire approaches would say that is unknowable, and so you just have to give people the freedom to push us in whatever direction feels most compelling, and that right now is AI. But journalists, researchers, critics should be asking these questions - and they are. There are very valid and real questions here about whether AGI is compatible with our clean energy needs (note to self: do more research here on actual numbers and costs).
- TODO: think about how previous massive, seismic shifts like this have felt and played out. If we think social media was one of those seismic shifts, where did that land us? Some of the same characteristics and companies/players at the heart of it: a few, very powerful and well-funded decision makers, pushing humanity forward in their vision. If they're able to build some super powerful model, how much will that concentrate power with them? Absolute power corrupts absolutely, etc. etc., read Careless People, ... But yeah more to understand here. Are there examples of where we've been able to harness the power of major seismic technology advances for good?
- Am sure there have been similar things with manufacturing automation causing job loss. Or computer software that runs regressions for you that you used to have to calculate by hand. Progress will continue, and there's likely always going to be pain and confusion around it. But instead of fighting the tide, maybe it's better to think about how to make it better? This is abundance again. But worth thinking about the 'first best' future you want to fight for, with your value system, and whether the systems we are building and investing in are compatible with that first best world. E.g., yes, salmon aquaculture allows us to feed more people, but if that is completely at odds with environmental sustainability, should we even be doing it? That depends on how you value peoples' immediate comfort vs. the environment. And then of course for something like climate change, it's more a short-term material gain for people vs. long-term discomfort.
- There is no such thing as a universal 'first best' world, but really the only thing we have is to stay true to our principles, beliefs, and lines that we draw - my lines being about treating people with respect, fairness, etc.
- What gives them the power ultimately, though, is demand. This is not to be a luddite. But I'm starting to think it is still important to hold on to semblances of individual thought, and try thinking about things without defaulting to AI. Yes, there are many things it is not worth spending my time thinking about (e.g., the syntax to plot this graph), but I fear the more I start blindly relying on it, the more power it gives to the giants, and the less power I have / freedom of thought and will.
- Btw - open source models really help with this. Even if they lag behind OpenAI/Anthropic by 5-6 months, it is super important that people keep building them and pushing them forward in the same way. And thank god for the passionate people that believe in open source projects, that give their valuable personal time to projects they believe will truly help push humanity forward. The people that release informational podcasts for the world to hear. Heck, the people that write blog posts like this?
- I don't know what it says about me that despite basically writing full sentences here, somehow adding a bullet point at the start makes my thoughts flow better. Maybe it feels more draft-ey? Maybe it's just a style I'm more used to? Anyways. Maybe one day I'll get Claude or ChatGPT to clean this up and remove the bullets. But for now, I prefer it this way. It's my little protest, my little push back, and belief in my messy independence.
Clearly, I have way too many thoughts on this. But I have run out of time to actually structure them properly or even write them cleanly. Staying true to the spirit of this blog, I am still going to publish this as-is. Just hope future me reading this doesn't cringe too hard, and is proud of current me for putting myself out there, doing this, committing to my self growth.
And yay I think that's the first post I've ended without being self critical about the messiness and lack of structure!