Artificial intelligence (AI) is being discussed in every industry and facet of life, from the corner shop to Number 10. Everyone is voicing an opinion but no one seems to agree on any specifics.
On the one hand, AI has the potential to be an amazing tool, carrying out tasks with speed and accuracy, freeing the workforce up to concentrate on more creative jobs. On the other, workers feel threatened, thinking their jobs are at risk. The truth, as usual, is probably somewhere in between.
I attended a paraplanner event some months ago where AI was discussed. The speaker asked his AI app to write 1,000 words on the merits of pension contributions. Some four seconds later, we had a 1,000-word document in front of us.
The speaker then asked the bot to reduce this to 500 words, which, again, it did in seconds. He then asked the AI to turn the content into a poem, and then a funny poem. It was, quite frankly, magical and had us all marvelling at its ability.
The speaker said that he’d used this presentation a few times in the previous weeks and he’d noted that the AI was improving. The poems and the comedic phrases produced were better in our session than the previous week. It was learning what made something funny and was improving each time it was asked. I was stunned.
I have been thinking about how I could integrate AI into my job and my team. Could it make us more efficient? We already use various tech to aid us in our advice work, so could AI help us to reach and respond to clients even quicker and more accurately?
Although we use templates for some common financial transactions, it still takes me some time to draft a recommendations letter. I know AI can create something similar within seconds.
It can write a letter recommending the client makes an Isa contribution and suggesting an appropriate investment portfolio in almost no time at all. But would it know to check whether or not the contribution is suitable for the client? Or whether the client has a cash need in the near future that would make investing the contribution inappropriate?
Much of the skill in recommending simple financial actions is not in drafting the letter (which I concede AI could do far quicker than me) but in the advice process itself – understanding whether or not the action would be in the client’s best interests. I’m not sure I would trust AI to know this.
Recently, I saw someone ask their AI application a question: ‘Should I overpay my mortgage or find a savings account for my money?’ This is something we get asked all the time, and I was interested to see what the response would be.
The information returned said it would be good to reduce the size of the mortgage by overpaying and consequently paying a reduced level of interest, but that savings rates may be higher and a better option for their surplus money.
There’s nothing wrong with this answer; however, what the AI hadn’t considered was that the most suitable response may have been neither option. Depending on the client’s objectives, the best advice may have been to instead pay their surplus cash into their pension, investing in the markets for growth while also benefiting from tax relief. Although AI seems able to answer a question swiftly and with (varying degrees of) accuracy, there is a lack of holistic nuance.
Perhaps time has moved on already and the AI bot which created the funny poem about pension contributions (something I don’t think I could do given many weeks) has already improved sufficiently to be able to assess a client’s objectives successfully and make recommendations accordingly. Even if this is the case, we all know there are times when the generally assumed best financial advice isn’t always the right advice for a particular client.
Consider a client who has recently inherited some shares. The client may be an inexperienced investor, have a low risk profile and have annual income close to the basic rate threshold, which could be breached with the receipt of the dividend income generated from the shares. The usual best advice would be to sell the shares within the allowances and use the cash elsewhere.
But would AI also consider that perhaps this client has inherited them from a parent and they have an emotional attachment to them, which is frankly more important to them than saving a few pounds in tax? Even if we, as the advice firm, agree a sell-down is appropriate, we might suggest it’s something to consider at the next year’s review meeting when the weight of grief has hopefully lifted.
The advice process is more than just understanding a client’s income, expenditure and future financial goals. When clients’ emotions are involved, the soft facts become far more important and I think many of us would draw a line before letting AI loose with our valued clients.
Apart from AI’s ability or otherwise to properly understand a client’s needs, the reports and letters generated also create some concern for me. Is the information actually correct and compliant?
My understanding is that these AI tools are trained on vast amounts of material gathered from the internet and combine patterns in that information into a relevant-sounding response. This would be fine as long as everything on the internet was absolutely correct.
And what about copyright? What happens when the original author realises we’re using their work for free and without permission? I don’t believe these issues have been resolved as yet, and probably won’t be while AI is developing so quickly.
How do we think our clients might feel knowing AI is creating their letters and reports? I’m not sure those paying for personal ongoing advice would be happy that the document produced is the product of a clever algorithm, rather than an actual person who has taken the time to get to know them and their particular situation and needs.
In broad strokes, AI can be an extremely positive tool when integrated within a business. There are tasks within financial services that can be completed more quickly and consistently with AI; however, the integration has to be very specific.
I think there will always be a need for ‘real intelligence’ to review whatever AI has produced to sense-check it against the client’s goals and to make sure the work produced is correct, compliant, personalised and appropriate.
I will continue to think about how we could use it within our business, albeit with firm boundaries in place, to make sure the most appropriate result is achieved.
As a post-script, I find it mildly amusing I had to write this feature with an old-school pen and paper because my internet connection had failed.
Rachel Halton is head of paraplanning at Informed Choice