Here's what I told him. Probably it freaked him out so much he deleted it and wiped it from his memory, but hey...
There's no doubt that advanced software programs using AI and other complex techniques played a major role in the current global financial crisis. However, it's also true that the risks and limitations of these software programs were known by many of the people involved, and in many cases were ignored intentionally rather than out of ignorance.
To be more precise: the known mathematical and AI techniques for estimating the risk of complex financial instruments (like credit default swaps and various other exotic derivatives) all depend on certain assumptions. At this stage, human intelligence is required to figure out whether the assumptions of a given technique really apply in a given real-world situation. So when it's unclear whether those assumptions hold, it's a human decision whether to apply the technique or not.
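To make this concrete, here is a minimal sketch (my own illustration, with invented numbers, not anyone's production code) of a standard parametric value-at-risk estimate that assumes normally distributed returns, paired with a crude statistical check of that assumption. On fat-tailed "crisis" returns the check fails, which is exactly the judgment call that currently gets left to humans:

```python
import numpy as np
from scipy import stats

def parametric_var(returns, confidence=0.99):
    """One-day value-at-risk, assuming returns are i.i.d. normal.

    The normality assumption is exactly the kind of contextual premise
    a human (or future AI) must judge before trusting this number.
    """
    mu, sigma = returns.mean(), returns.std(ddof=1)
    return -(mu + sigma * stats.norm.ppf(1 - confidence))

def normality_looks_plausible(returns, alpha=0.05):
    """Crude check of the model's core assumption via a Jarque-Bera
    test; rejection suggests fat tails the VaR formula understates."""
    _, p_value = stats.jarque_bera(returns)
    return p_value > alpha

rng = np.random.default_rng(0)
calm = rng.normal(0.0005, 0.01, 1000)            # well-behaved returns
crisis = rng.standard_t(df=3, size=1000) * 0.01  # fat-tailed returns

for name, r in [("calm", calm), ("crisis", crisis)]:
    print(name, "VaR:", round(parametric_var(r), 4),
          "| normality plausible:", normality_looks_plausible(r))
```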
A historical example of this problem was the LTCM debacle in the '90s. In that case, the mathematical techniques used by LTCM assumed that the economies of various emerging markets were largely statistically independent. Based on that assumption, LTCM entered into some highly leveraged investments that were low-risk unless the assumption failed. The assumption failed.
Similarly, more recently, Iceland's financial situation was mathematically assessed to be stable, based on the assumption that (to simplify a little bit) a large number of depositors wouldn't decide to simultaneously withdraw a lot of their money. This assumption had never been violated in past situations that were judged as relevant. Oops.
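For illustration, here's a toy calculation (my own invented numbers, not LTCM's actual model) showing how much work the independence assumption does: the same leveraged portfolio looks tame if the markets are uncorrelated, and lethal if they all move together:

```python
import numpy as np

def portfolio_sigma(weights, vols, corr):
    """Portfolio volatility from per-market vols and a correlation matrix."""
    cov = np.outer(vols, vols) * corr
    return float(np.sqrt(weights @ cov @ weights))

n = 8                                    # eight hypothetical emerging markets
weights = np.full(n, 1.0 / n)            # equal-weighted portfolio
vols = np.full(n, 0.20)                  # 20% annual volatility each

independent = np.eye(n)                  # the assumption: zero cross-correlation
crisis = np.full((n, n), 0.9)            # what actually happens in a panic
np.fill_diagonal(crisis, 1.0)

print("assumed risk: ", round(portfolio_sigma(weights, vols, independent), 3))
print("realized risk:", round(portfolio_sigma(weights, vols, crisis), 3))
# Leverage sized to the "assumed" number is ruinous at the "realized" one.
```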
A related, obvious phenomenon is that sometimes the humans assigned the job of assessing risk are given a choice between:
1. assessing risk according to a technique whose assumptions don't really apply to the real-world situation, or whose applicability is uncertain, or
2. saying "sorry, I don't have any good technique for assessing the risk of this particular financial instrument"
Naturally, the choice commonly taken is 1 rather than 2.
In another decade or two, I'd predict, we'll have yet more intelligent software, able to automatically assess whether the assumptions of a given mathematical technique apply in a given context. That would avoid the sort of problem we've seen recently.
So the base problem is that the software we have now is good at making predictions and assessments based on contextual assumptions ... but it is bad at assessing the applicability of contextual assumptions. The latter is left to humans, who often make decisions based on emotional bias, personal greed and so forth rather than rationality.
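One way to picture the gap: today's software bundles a prediction routine with assumptions that someone else has to check. A hypothetical sketch (the class structure, names, and toy model are all invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Model:
    """A predictive model bundled with explicit, checkable assumptions.

    Today the `holds` checks are written (or quietly skipped) by humans;
    the prediction above is that future AI will evaluate them itself.
    """
    name: str
    predict: Callable[[dict], float]
    assumptions: list  # each item: (description, holds(context) -> bool)

    def risk_estimate(self, context: dict) -> float:
        for description, holds in self.assumptions:
            if not holds(context):
                raise ValueError(
                    f"{self.name}: assumption not satisfied: {description}")
        return self.predict(context)

# Hypothetical usage: a toy model that is only valid in liquid markets.
toy = Model(
    name="toy-spread-model",
    predict=lambda ctx: ctx["spread"] * 10.0,
    assumptions=[("market is liquid", lambda ctx: ctx["daily_volume"] > 1e6)],
)
print(toy.risk_estimate({"spread": 0.02, "daily_volume": 5e6}))
```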
Obviously, the fact that a fund manager shares more in their fund's profit than in its loss has some impact on their assessments. This biases fund managers toward taking risks: if the gamble comes out well, they get a huge bonus, but if it comes out badly, the worst that happens is that they find another job.
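The arithmetic of this asymmetry is easy to demonstrate. With invented but plausibly shaped numbers, a bet that has negative expected value for the fund can still have positive expected value for the manager:

```python
# Illustrative numbers only: a leveraged bet that is a bad deal for the
# fund's investors is a good deal for the manager, because the manager's
# participation in gains and losses is asymmetric.
p_win = 0.5
fund_gain, fund_loss = 100e6, -120e6   # fund outcomes, in dollars
bonus_rate = 0.20                       # manager keeps 20% of gains...
downside_share = 0.0                    # ...and bears 0% of losses

fund_ev = p_win * fund_gain + (1 - p_win) * fund_loss
manager_ev = (p_win * bonus_rate * fund_gain
              + (1 - p_win) * downside_share * fund_loss)

print(f"fund EV:    {fund_ev / 1e6:+.1f}M")     # -10.0M: bad bet for investors
print(f"manager EV: {manager_ev / 1e6:+.1f}M")  # +10.0M: great bet for the manager
```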
My feeling is that the sorts of problems we've seen recently are hiccups on the path to superefficient financial markets based on advanced AI. But it's hard to say exactly how long it will take for AI to achieve the understanding of context needed to avoid this sort of "minor glitch."
P.S.
After I posted the above, there was a followup discussion on the AGI mailing list, in which someone asked me about applications of AGI to investment.
My reply was:
1)
Until we have a generally very powerful AGI, applications of AI to finance will be in the vein of narrow AI. Investment is a hard problem, not one for toddler-minds.
Narrow-AI applications to finance can be fairly broad in nature, though. For example, I helped build a website called stockmood.com that analyzes financial sentiment in news; a toy sketch of that kind of sentiment scoring appears below.
2)
Once we have a system with roughly adult-human-level AGI, then of course it will be possible to create specialized versions of it oriented toward trading. These will be far superior to humans or narrow AIs at trading the markets, and whoever owns them will win a lot of everybody's money unless the government stops them.
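To give a flavor of the narrow-AI sentiment analysis mentioned in point 1, here is a deliberately minimal keyword-counting sketch. stockmood.com's actual methods are more sophisticated than this; the word lists and headlines are invented:

```python
import re

# Invented word lists for illustration; a real system would use far
# richer lexicons, learned weights, and context handling.
POSITIVE = {"beats", "surge", "upgrade", "record", "growth"}
NEGATIVE = {"misses", "plunge", "downgrade", "default", "writedown"}

def headline_sentiment(headline: str) -> int:
    """Score a headline as (#positive words - #negative words)."""
    words = set(re.findall(r"[a-z]+", headline.lower()))
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(headline_sentiment("Acme beats estimates, shares surge on record growth"))  # 4
print(headline_sentiment("Bank hit by writedown after ratings downgrade"))        # -2
```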
P.P.S.
Someone on a mailing list pushed back on my mention of "AI and other mathematical techniques."
This seems worth clarifying, because the line between narrow-AI and other-math-techniques is really very fuzzy.
To give an indication of how fuzzy the line is ... consider the (very common) case of multiextremal optimization.
GAs (genetic algorithms) are optimization algorithms that are considered AI ... but is multi-start hillclimbing AI? Many would say so. Yet some multiextremal optimization algorithms are considered operations research rather than AI -- multistart conjugate gradients, say...
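For the record, here's what such an algorithm looks like; whether you file it under AI or under plain numerical optimization is precisely the fuzzy judgment at issue (the bumpy test function is invented):

```python
import math
import random

def multistart_hillclimb(f, dim, starts=20, steps=200, step_size=0.1, seed=0):
    """Multi-start stochastic hill climbing on a multiextremal function."""
    rng = random.Random(seed)
    best_x, best_val = None, float("-inf")
    for _ in range(starts):                        # restart from random points
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        val = f(x)
        for _ in range(steps):                     # greedy local search
            cand = [xi + rng.gauss(0, step_size) for xi in x]
            cand_val = f(cand)
            if cand_val > val:                     # only accept uphill moves
                x, val = cand, cand_val
        if val > best_val:                         # keep the best local optimum
            best_x, best_val = x, val
    return best_x, best_val

# An invented bumpy objective with many local maxima; global max at the origin.
def bumpy(x):
    return sum(math.cos(5 * xi) - xi * xi for xi in x)

x, v = multistart_hillclimb(bumpy, dim=2)
print([round(xi, 3) for xi in x], round(v, 3))
```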
Similarly, backprop NNs are considered AI ... yet polynomial or exponential regression algorithms aren't. But they pretty much do the same stuff...
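For instance, here's the "non-AI" member of that pair recovering a curve from noisy data, doing essentially what a small backprop net trained on the same points would do (toy data, invented coefficients):

```python
import numpy as np

# Both a backprop net and polynomial regression are parametric function
# approximators fit by minimizing error; this is the "non-AI" one.
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 50)
y = 2.0 * x**2 - 0.5 * x + rng.normal(0, 0.05, x.size)  # noisy quadratic

coeffs = np.polyfit(x, y, deg=2)   # least-squares fit, i.e. "training"
print("recovered coefficients:", np.round(coeffs, 2))  # ~ [2.0, -0.5, 0.0]
```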
Or, think about assessment of credit risk, to determine who is allowed to get what kind of mortgage. This is done by AI data mining algorithms. OTOH it could also be done by some statistical algorithms that wouldn't normally be called AI (though I think it is usually addressed using methods like frequent itemset mining and decision trees, that are considered AI).
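A toy sketch of the decision-tree version, assuming scikit-learn is available; the features, data, and approve/decline labels are all invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Columns: income (k$/yr), debt-to-income ratio, number of prior defaults.
X = [[80, 0.20, 0], [30, 0.60, 1], [55, 0.35, 0],
     [25, 0.70, 2], [95, 0.15, 0], [40, 0.50, 1]]
y = [1, 0, 1, 0, 1, 0]   # 1 = approve mortgage, 0 = decline

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[60, 0.30, 0], [28, 0.65, 1]]))  # e.g. [1 0]
```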
2 comments:
Hi Ben,
I agree with your perspective. The assumptions underlying the pricing of mortgage-backed securities were very good at smoothing out idiosyncratic risk, but completely failed to take into account the systematic risk of a failure in the housing market.
As a result, investors who felt safe leveraging the heck out of these AAA-rated, AIG-insured securities suddenly had margin calls on their investments.
Because you need very complex mathematical and AI algorithms even to price these securities, when you need to sell them quickly you will have a very hard time getting anyone to buy them at anything near their actual value. This problem could be avoided in the future by an AI program that provides provenance metadata for the price calculation. This would provide the transparency and accountability that buyers and sellers need to trust the price of these securities.
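A minimal sketch of how such provenance metadata might look (all names, fields, and values here are invented for illustration):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class PricedSecurity:
    """A price bundled with provenance: which model produced it,
    under which assumptions, from which inputs."""
    security_id: str
    price: float
    model: str
    model_version: str
    assumptions: tuple
    inputs: dict
    computed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

quote = PricedSecurity(
    security_id="MBS-2006-XYZ",   # hypothetical identifier
    price=72.15,
    model="gaussian-copula",
    model_version="1.3.0",
    assumptions=("house prices uncorrelated across regions",),
    inputs={"base_correlation": 0.15, "recovery_rate": 0.4},
)
print(quote)
```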
Unfortunately, narrow AI programs that assume the existence of rational investors are going to do quite poorly for the next few weeks while fear is gripping the financial market. I'm curious how stockmood is doing with respect to the news coming out of the market right now. Must be wonderful training data.
Jeremy
--
I'd think maybe when physical sensor grids get integrated into financial news, novel AIs could become the physical-capital costing experts. The "fear AI" would be tougher to gauge among many players. Maybe some social-network AI bonuses, but no parity until at least a Turing Test pass.
Somewhere a while back someone mentioned that primitive visual AI camera "mapping" is a basic hindrance in AI. Once holographic movies and video games start being manufactured by Sony, the need for software AI agents should see capital solve this bottleneck. I dunno when. We should start seeing OLED 3D wallpaper games in ten years, and holograms maybe ten years after that?
With robots that have real-world mapping capabilities, you have to be careful you don't accidentally give them infantry hegemony over us flesh-and-bones. The latter means stockpiling EMPs while making sure they don't stockpile neutron bombs and nukes.