AI - Everything in moderation (pan metron ariston)
- Thomas Doukas
- May 29
- 8 min read
Updated: Aug 26

In recent months we have heard a lot about AI: the benefits, the arguments, and then all the counterarguments from the sceptics. So, expectedly, some thoughts and inquiries from me will be about this! The truth is that in principle AI sounds like a brilliant tool, and I don’t want to regurgitate all the things we read and hear on social and other media about replacing humans and taking over our jobs. This might be the case in some instances, but perhaps this is what evolution is about? We could have said the same many decades back, when computers replaced typewriters or when machinery replaced domestic textile production. In the case of AI, however, there might be a price to pay on a different level. Without pretending to be an expert (far from it), I want to present a couple of examples as an illustration of everyday use and to offer some thinking pointers.

A few months ago, a newly qualified colleague asked me to write a testimonial or recommendation for an employment agency, and I agreed. I scribbled down some key words and phrases that characterise his work, his approach and his methodology. My recommendation looked good, but I decided to run it through an AI platform just for the sake of it. The AI tool asked me to provide some key words, which I did using the ones I had already written, and it produced a paragraph similar to mine but in more embellished and, let’s say, proper English; nothing radically different. I decided to combine the two, using mostly my own words with some of the text produced by the AI tool. This is a good example of how AI can be useful, with ‘yours truly’ still using his own mind and thoughts as the foundation and starting point for the task. It would be a different matter if I asked AI to produce a whole book for me based on specific requirements I feed into the tool.
Leaving aside the moral issues of ‘cheating’ and ‘authoring’ something that is not my own work, the latter would eventually make me rely on the tool even for the simplest of tasks. We know very well it is not difficult to fall into a comfort zone where a machine or bot does the work, or the thinking, for us, e.g. shopping and entertainment suggestions. Technological evolution over the last few decades has taught us that it is very easy to become lazy and complacent with such new tools. The selling point is that such a tool will free up more time for us, but to do what? Is this development yet another way to hook us on social media and distance us from thinking for ourselves? To create the illusion of being in control of our time and gaining more free time, to what end? To engage more, virtually, with other human beings we don’t know, and to read and reuse their thoughts, or any other thoughts served to us via these platforms, thereby losing our own ability to evaluate and think for ourselves? Are we becoming numbed? There is a danger that we end up with no time and space to think or reflect, because we are either communicating constantly with other people or hooked on a screen all the time, so there is no time left for ourselves. Is this going to replace our critical thinking and our ability to be creative, to find our own words and ideas when we need to compose something? The claim that with the use of AI we will become idle and less creative might be a valid one; if AI is going to eventually (and perhaps gradually) replace our thinking, then, without wanting to appear averse to technology, we may well become mentally inactive if we are not cautious. Ultimately, could AI be used to manipulate my thoughts?
Quite possibly (and probably), and that is why we need to keep our critical thinking and decide how and by whom we are influenced; this should be a conscious and permitted decision. We already know our brains are very keen on taking shortcuts: stereotypes, for example, are a semi-nonsensical shortcut our brain uses to categorise people and to understand groups and their generalised characteristics so as to avoid elaborate thinking (making assumptions based on looks, racial information or how someone sounds are also quick shortcuts to avoid deeper thinking). It is known that some forms of AI fall into this bias trap too, since they replicate human behavioural patterns. Check out, for example, Joy Buolamwini, a computer scientist at MIT, who has extensively researched the social implications of artificial intelligence and bias in facial analysis algorithms (the software finds it hard to identify dark-skinned women). Critical thinking is labour and time intensive, but it means you won’t have other people or groups of people telling you what to think. We all avoid such effort, and that is why we fall victim to marketing tricks; it is often why we imitate each other, because we don’t bother thinking and making decisions for ourselves. I can’t help thinking of all these things we believed we invented and controlled, only to realise, just by looking at human history over the millennia, that we have really become slaves to those supposed tools. Yuval Noah Harari, in his book ‘Sapiens: A Brief History of Humankind’, discusses the case of wheat and how we fell into the illusion that we domesticated wheat when, instead, wheat domesticated us (the argument being that we stopped being hunter-gatherers and settled in areas where we could grow wheat as our basic staple).
We also thought we domesticated the machine and the car, and those have domesticated us instead; the same is true of mobile phones and many other technological tools available to humankind, and maybe the same is going to happen with AI (or it is happening already). We think it is a tool, but in the end we become slaves of the tool under the illusion of being in control, when we really are not. This connects to something that keeps coming to my mind: the Promethean curse. According to Greek mythology, Prometheus, a Titan, stole fire from the gods and gave it to humans. As a result, he has been considered humankind’s benefactor, the idea being that fire would bring arts, technology and civilisation, and humans would become as powerful as the Olympian gods. Zeus, however, was so angry that he had Prometheus chained to a mountain in the Caucasus for eternity and sent an eagle to feed on his liver every day; his liver would regrow each night only to be consumed the next day. The curse, however, goes much further than Prometheus’ eternal suffering: humans were also condemned to never completely control fire, so it would destroy our homes and crops and lives. Understanding the dimensions of this myth and the lessons to take away, one cannot help thinking how much the notion of moderation is at play here, and everywhere else that involves human behaviour (pan metron ariston = everything in moderation). Are we really able to possess a tool and use it as a tool, without becoming its slave? Staying within the ancient Greek realm: in De Anima, Aristotle distinguishes between the passive and the active mind (or intellect, noêsis); the former receives ideas and concepts from external sources, almost resembling AI, while the latter (also called poietikon), in contrast, coincides with the Aristotelian God, in the sense of a self-awareness that thinks itself.
In other words, while the passive mind receives information from the senses and external stimuli, the active mind processes such information and reasons. Although this is a very obscure and widely disputed subject, following this line of thought, AI could be the passive mind, used as a ‘tool’; or it could be likened to the active intellect, taking on its own form and freedom. But could AI pursue the quest for divinity in the way the active mind does, as an existential desire and not merely as information? Could AI fall in love, or contemplate death? Wish and hope, pray, experience grief? Maybe it can simulate these.

A final thought about AI and coaching. Last year I took part in a demo about AI and how it can be implemented to assist coaching. At the presentation the hosts simulated a coaching session where the AI tool was given some general information about the client (a mixture of demographic and other details), and it generated a possible story about them, with suggested questions, goals and outcomes. Considering the newness of the tool and its status (being a machine), the result was quite impressive. On second glance, however, I couldn’t help thinking that it resembled what fortune tellers do (palmists and Tarot readers): based on general observations and random assumptions about a person, they make up a story that is more or less accurate, and they adjust it through the process. The methodology is typically to attempt predictions on matters such as future romantic, financial and childbearing prospects, or to offer character readings; these, of course, are universal human concerns that would reasonably get the attention of most of us. Similarly, other external and random observations might be used, for example whether someone is wearing a wedding ring, or their general attire, to draw conclusions about the person and build the narrative.
There is often a reference to a dead person, someone who is missed a lot (who hasn’t lost someone?), and through conversation more information is gathered to drive the story to more solid ground. Of course, if we look closer, we realise that this is what we also do, as coaches or simply as people: we engage in conversation to find out more, we sometimes make assumptions even if we don’t say them out loud, and we create a story about the person. Through the conversation and questions we build more solid foundations, and the story becomes reality. So AI tools really do what we are doing too; they replicate our methodology. I guess what is missing is empathy and human understanding. But do we need those? I’ll leave that question for you to answer.

We know our brains already avoid hard work and deep thinking, and AI comes in as a handy tool to help us. We also know that this tool brings along our own, very human ideas, biases and methods, and all in all it does a great job mimicking them. However, if we let it take over, we run the risk of mental inactivity, of letting the machine do all the thinking for us, and consequently of even losing the ability to make decisions and to engage cognitively in the rational evaluation of options, possibilities and choices. Ultimately, we may even lose those softer, more human aspects we claim make us different from the machine. With all this in mind, it is important to be cautious with all that is generously offered to us as a panacea for our modern-era problems, and to consider why it matters to use our own thoughts, words, views and opinions, to engage our minds at a deeper level, and to trust our intuition, creativity and judgement. All things in good measure then!

For some interesting thoughts and opinions about the AI debate, check the links below:
Review of Yuval Noah Harari’s ‘Sapiens: A Brief History of Humankind’: https://www.theguardian.com/books/2014/sep/11/sapiens-brief-history-humankind-yuval-noah-harari-review
In her book ‘Unmasking AI: My Mission to Protect What Is Human in a World of Machines’, Buolamwini looks at how algorithmic bias harms people and recounts her quest to draw attention to those harms: https://www.poetofcode.com/
