Regarding AI, I think for the types of questions we would be asking, we would get better information from a series of targeted web searches using more than one search engine (there are significant differences between Google and DuckDuckGo results, for example), as well as direct searches of specific sites: Wikipedia (which is also full of inaccuracies), academic sources, history sites originating in multiple countries, and so on. Anything an AI produces has to be built from information that is already out there, or it would never show up at all. Beyond that, there is no way to actually know whether what the AI reports is accurate. I have been following the latest developments in this area with some concern about both accuracy and promoted narratives. Targeted searches would take longer, but they would also give us a basis for judging how accurate the information is, rather than taking the AI's word for it.
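To make the targeted-search idea concrete, here is a minimal sketch that builds the same query for several engines and for specific sites so the results can be cross-checked. The engine URLs are real search endpoints, but the site list and example topic are illustrative assumptions, not a recommendation.

```python
# Build comparable queries across engines and site-restricted variants.
# The site list and topic below are invented examples for illustration.
from urllib.parse import quote_plus

ENGINES = {
    "google": "https://www.google.com/search?q=",
    "duckduckgo": "https://duckduckgo.com/?q=",
}

# Restrict some queries to specific sites (the site: operator works on both).
SITES = ["en.wikipedia.org", "scholar.google.com"]

def build_queries(topic):
    """Return one URL per engine, plus site-restricted variants of each."""
    urls = []
    for base in ENGINES.values():
        urls.append(base + quote_plus(topic))
        for site in SITES:
            urls.append(base + quote_plus(f"site:{site} {topic}"))
    return urls

for url in build_queries("1980s expert systems encephalitis diagnosis"):
    print(url)
```

Opening each URL side by side makes the differences between engines, and between the open web and specific sites, easy to see for yourself.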
This reminds me of the 1980s, when AI was also "going to put everyone out of work and take over the world." I worked for the US Army Signal School, which had an office devoted to AI, staffed by a couple of PhDs, that I had a management role in. Some of you may remember the M1 Shell. Back then, these were rule-based programs that could, as one example, diagnose encephalitis better than most doctors and as well as the experts. The rules were, of course, derived from the experts, and this technology is widely used today and is useful. But it didn't take over the world. It has perhaps put a few folks out of jobs, but not doctors. LOL. I think the biggest danger with some current directions in ChatGPT is more along the lines of driving particular narratives that may or may not have a real basis in fact. But that is already happening without ChatGPT across a wide spectrum. And the jokes I have asked it to write aren't putting a single stand-up comedian out of business. LOL. I have been fortunate to get access to a version of this latest software (based on ChatGPT) and have not been at all impressed. Or rather, I guess I should say it is way overhyped, much like the AI from my younger days when the Army was worried about it and was looking at both the offensive and defensive aspects of its use. They quietly dropped that office after a few years, although there is no doubt that the Signal Corps and Cyber Command are knee-deep in it again today.
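For anyone who never saw one of those 1980s systems, the core idea can be sketched in a few lines: a toy forward-chaining rule engine in the spirit of shells like the M1 Shell. The rule names, symptoms, and conclusions here are invented for illustration only, not real diagnostic criteria.

```python
# A toy forward-chaining rule engine. Facts are simple strings; each rule
# fires when all of its premises are known, adding its conclusion as a new
# fact. The rules below are made-up examples, not medical knowledge.

RULES = [
    # (rule name, set of required facts, conclusion drawn)
    ("r1", {"fever", "stiff_neck"}, "possible_cns_infection"),
    ("r2", {"possible_cns_infection", "confusion"}, "suspect_encephalitis"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({"fever", "stiff_neck", "confusion"}, RULES)
print("suspect_encephalitis" in result)  # r1 fires first, which lets r2 fire
```

The intelligence, such as it was, lived entirely in rules written down by human experts; the program just chained them together, which is why these systems were useful but never in danger of taking over the world.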
Virgil