The benefits were obvious. It meant a Mistral program user could return to a previous input, edit it, and then continue from where they had left off.
The problem with that, from a programming point of view, is that each input in a succession can have a bearing upon what can 'qualify' for a subsequent input. In other words, an early input can define the 'range' limits of later inputs. Returning to change an early input can therefore make a nonsense of later user entries, though not always, and not for all entries.
Simply clearing every later entry after such an edit would be unacceptable: it would render the parallel operating system practically useless, with little to be gained over the DOS versions. We therefore had to develop a system we described as 'Dynamic Error Trapping'. The clue is in the word 'Dynamic': only the entries actually invalidated by the edit are flagged for re-entry, and they are identified at the moment the edit is made.
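The principle can be sketched in a few lines. To be clear, everything below is our own illustrative assumption about how such a scheme might look, not Mistral's actual implementation: each input carries a rule giving its allowed range, which may depend on earlier inputs, and when an early input is edited, only the downstream entries that fall outside their recomputed ranges are flagged for re-entry.

```python
# Illustrative sketch of 'dynamic error trapping': after an earlier input
# is edited, re-validate the downstream inputs and flag only those whose
# allowed range (which may depend on earlier values) no longer holds.
# All names and ranges here are hypothetical, not Mistral's actual code.

def revalidate(values, ranges, edited_index):
    """values: list of the current input values.
    ranges: list of functions; ranges[i](values) returns the (lo, hi)
            valid range for input i, possibly computed from earlier values.
    Returns the indices of downstream inputs now out of range."""
    invalid = []
    for i in range(edited_index + 1, len(values)):
        lo, hi = ranges[i](values)
        if not (lo <= values[i] <= hi):
            invalid.append(i)  # flag for re-entry; leave everything else alone
    return invalid

# Example: input 1's ceiling depends on input 0; input 2 is independent.
ranges = [
    lambda v: (0, 100),   # input 0: fixed range
    lambda v: (0, v[0]),  # input 1: must not exceed input 0
    lambda v: (0, 50),    # input 2: fixed range, unaffected by the edit
]
values = [40, 30, 10]     # all valid initially
values[0] = 20            # the user goes back and edits input 0
print(revalidate(values, ranges, 0))  # -> [1]: only input 1 needs re-entry
```

The point of the sketch is that input 2 survives the edit untouched; only the entry whose qualifying range was genuinely broken is trapped.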
The task was daunting. Let's face it: a string, or a row, of just seven short-integer numerals can identify the location of every human being on the planet, including their age and gender. Add an eighth and you can identify their hair colour and eye colour, along with how many dental fillings they might have! Nine numerals could in theory identify the location of every grain of sand on the planet, including its chemical composition, colour, mass and even depth.
Now deal with forty inputs! That is what we were up against, and at first it seemed like mission impossible: like trying to map infinity, for all intents and purposes.
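To put an illustrative number on that scale (the figure of one hundred values per input is our own assumption, not Mistral's actual specification): even with modest ranges, the combined input space of forty inputs is far beyond anything that could be enumerated or tested exhaustively, which is why the error trapping had to be dynamic rather than precomputed.

```python
# Illustrative arithmetic only; 100 values per input is an assumed figure.
values_per_input = 100
number_of_inputs = 40
combinations = values_per_input ** number_of_inputs  # 100**40 == 10**80
print(f"{combinations:.2e}")  # ~1e+80, of the order of the estimated
                              # number of atoms in the observable universe
```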
At the outset, Mistral cheered up its small team of highly qualified and skilled programmers, comprising an Honours graduate, three professors, and three professionally qualified members with a collective century of RAC experience, by declaring that Mistral software would be useless as a viable, marketable product if it were only 99.9% right. It had to be 100%.
We do not believe for one moment that so-called 'Artificial Intelligence' could ever achieve what Mistral has managed to achieve for the niche market it chooses to serve. At least not within the user program session times, in some cases just a few seconds of run time, in which Mistral can offer a guaranteed, audit-trailable, accurate result. Importantly, a result for which Mistral is held accountable. The only way AI could achieve this might be by plagiarising Mistral, and frankly that is never going to happen. At least not in our lifetime.
The other challenge facing the likes of ChatGPT (AI) is this: how will it even know what input parameters it needs in order to compute an entirely accurate and reproducible result? And from where, or from whom, should it seek the answers to those parameters? Many of those answers are restricted, commercially sensitive data, available only to qualifying, which often means paying, account holders. If ChatGPT hacks through a paywall then it is theft, pure and simple, for which Mistral would certainly sue, for tens of millions in reparations. And no, Mistral does not give a damn about how big OpenAI (owners of ChatGPT) or Microsoft Corporation or Alphabet Inc (owners of Google) might be. Remember: David killed Goliath!
Furthermore, what logical GUI (Graphical User Interface) will AI assemble and display to the researcher looking for answers? Will that same GUI be presented to the same user the next day, or a week, a year, or ten years later? We think not. Frankly, if we were responsible for the RAC budget of a large supermarket chain, running into millions of dollars, pounds or euros per month, we would wish to know our spend was being directed to technology and data sources that are traceable, and whose providers are accountable!
AI or not, don't even think about asking! And certainly don't bother trying to 'scrape' our data. You will not succeed!