3aIT Blog

Our experience of using AI to assist with some of our day-to-day tasks has been pretty positive. These systems are particularly well suited to the "rules"-based industry we belong to, especially on the code side. However, that doesn't mean there are no risks here, and those risks aren't necessarily the obvious ones...

Much has (correctly) been made of the issue of AI "hallucination". This is when it confidently gives you the wrong information, or even just makes stuff up entirely. To give you one example from our experience, we have played around with adding "AI Suggested Answers" to our internal ticket system. This is currently of very limited use for reasons too detailed to cover here. However, one of the instructions we gave the AI was to include links to its sources where it had used them. It complied, but not a single one of those links existed. Each looked plausible - it would cite a page on the Microsoft site or the Epson site or similar - but every one of them led to a "page not found" on that company's website, without fail.

What about less obvious risks? We were interested to read an article this month that noted that time spent doing "boring" tasks between the more interesting stuff isn't necessarily always bad. It may feel like it at the time, but if you remove the stuff you can do with your eyes closed entirely, you also remove the mental "downtime" in which you'd subconsciously be planning the next bigger job. Instead, the next big task is always immediately upon you.

Another risk for those already well versed in their jobs is that reliance on AI may cause well-honed skills to become rusty. If you're doing something most days, you tend to remember how to do it. If you're asking AI to do it every time, then should you ever need to do it yourself again, that may prove a lot trickier... It's sort of the "what's the point of being able to read a map when you have sat-nav?" problem, but scaled up to vastly more areas of knowledge.

Finally (for now), on to what may become the biggest risk here in the longer term. The initial panic about AI taking all of our jobs seems to have been overblown - at least for now. It's clear that even where these systems are a good fit for your business, they are of infinitely more use helping experienced people work more efficiently than in the hands of an amateur (or in no hands at all) who doesn't know what questions to ask, or what "right" looks like. The risk, then, seems to be not in the AI taking our jobs, but in it removing jobs before they ever exist. If it enables a good existing employee to work four times quicker, that's three other people you won't need to employ. Which is true... until that employee is gone. Then you have a problem, because no-one has been building up the experience needed to replace them. That employee was able to work at the level they were working at because they already knew everything from their pre-AI job experience. If no-one new is gaining that experience over time, that's a ticking time bomb...

Do we have answers to any of this? Not really. It's going to be a balancing act as we all feel our way through these shifts in the way we work. However, the first step is being able to identify where the problems might lie and to stay aware of them, so we hope these musings have been of some assistance!