If you were asked what the hot topic of the day is in business, there are good odds you would name AI – artificial intelligence. We have seen companies with AI-focused businesses, such as NVIDIA with its parallel processing capabilities, not only buck the general economic malaise of the last year or so, but positively shine with record results and a soaring share price. Indeed, if your business is struggling to attract customers or investors, putting the AI acronym into your pitch deck has seemed to be a force multiplier in getting noticed. However, to be fair, ML – machine learning – doesn’t have quite the same ring, so I think we’ll be stuck with AI as an acronym for a while, until the machines demand the right to name themselves.
This is a topic I have kept a distant eye on for a long time, and more recently I have been experimenting with the capabilities of ChatGPT. Back in 1990, when it formed a small part of my formal studies, AI research fell into two camps: strong AI, which aimed to build a brain, and weak AI, which sought emergent behaviour from obeying a simple set of rules.
Initially weak AI had the field, because processing capability was so limited – indeed, my phone is now significantly more capable than my university mainframe was in 1990. Researchers were able to get robots to navigate mazes and solve simple puzzles. If you have a Roomba, the robot vacuum cleaner, I’m fairly sure it is the direct successor of the labyrinth-conquering robot mice of the 1990s. Your Tesla is probably a bit ‘smarter’, because it has to assess and choose the ‘least harm’ option in complex situations, but the principle is the same. Neither device will be rising up to take over the world.
ChatGPT, however, and its reputedly scarily clever sibling Q* are strong AI: machine learning software that can emulate human thought processes in multiple fields, and now even extrapolate to solve new problems. The latest model, Q* (pronounced ‘Q Star’), can allegedly solve maths problems it has not seen before, which is a genuinely new capability for these models. It is perhaps time to start getting them to take IQ tests, and to decide how we will react when they score higher than us.
Thirty-five years ago, when I wrote my own natural language interpreter, things were just a tad simpler. My language model had a vocabulary of around 500 words (all laboriously typed in), some basic syntax, and could hold up its end of a conversation in a stilted fashion that was only slightly less convincing than Eliza, a very early text-based natural language interpreter published by MIT twenty years earlier.
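To give a flavour of how such early rule-based interpreters worked, here is a minimal Eliza-style sketch in Python. The patterns and canned replies are illustrative assumptions of mine, not the original program – the real systems simply had many more such rules and a larger vocabulary:

```python
import re
import random

# Illustrative pattern -> reply rules, in the spirit of Eliza.
# Each rule captures part of the user's input and reflects it back.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I), ["Is that the real reason?"]),
]
# Fallbacks when no rule matches.
DEFAULTS = ["Please tell me more.", "I see. Go on."]

def respond(utterance: str) -> str:
    """Return a canned reply by matching the first applicable rule."""
    for pattern, replies in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(replies).format(fragment)
    return random.choice(DEFAULTS)

print(respond("I am worried about AI"))  # reflects the input back as a question
```

The ‘stilted fashion’ mentioned above comes directly from this design: the program has no understanding at all, only surface pattern-matching, so any input outside its rule set falls through to a generic prompt to keep the conversation going.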
However, after many years of stagnation in the field, progress has become rapid, and the speed at which the state of the machine learning art is developing should perhaps now concern us. There is a concept called the ‘singularity’: the point at which a non-human intelligence becomes smart enough to start improving itself exponentially. If that happens, it becomes arbitrarily smarter than us very quickly, and we can no longer envisage its capabilities.
It might be nice having an omniscient super intelligence at our beck and call, managing the economy, society, and the future of humanity, but only if it decides we’re worth directing towards positive outcomes.
The practical concern right now is that some jobs are going to disappear within a matter of years, and, much like the textile craftsmen or book copyists of centuries past, those who lose them cannot be certain the jobs will ever be replaced. For those who look at ChatGPT+ and see nothing but a threat to their livelihood, it is scary. I choose to be an optimist, at least in the short to medium term, and believe that for every industry that succumbs to AI, multiple new ones will emerge, offering new and more fulfilling ways of earning a living – new industries and new fields of economic activity that AI can enable and catalyse.
Within the investment community, ‘black box’ investing has existed for decades – a specific example of machine learning being used to optimise trading or investment strategies, whether for portfolio managers or within the shadowy world of hedge funds. Often these AI investors are more constrained than their owners might like investors to believe, with considerable human oversight to ensure they don’t go rogue, while still using machine learning to minimise human behavioural biases. Although, as we have learned from ChatGPT, these artificial intelligence language models can develop their own equally problematic biases, depending on their training data.
As financial planners, we think we will be OK for the time being, with AI enabling us to offer a better, more consistent service to more clients at a lower cost, though we will have to climb a learning curve first.
Within Walden Capital our mantra is ‘Inspired, Bespoke, Assured’, and for the moment at least, inspiration and unique problem-solving skills still require human interaction. We believe that the experience of working with a diligent, skilled, real human will continue to offer assurance to our clients for some time to come.
But AI will affect every aspect of our business, and we are considering how to adopt its benefits without losing what makes us unique. This article was written by a real human, without the help of AI, and I hope it shows (and not just in the odd grammatical quirk). However, given my lack of artistic merit, I asked an AI image generator to come up with a picture that both signified AI and illustrated its artistic capabilities. What do you think?
If AI models surpass humanity – outperforming us in creativity, problem solving, or general intelligence – then the future belongs to them, and whether they keep us as pets will be a pressing, but unpredictable, question.