

ABRAM WHITE PAPER: TESTING AND EVALUATING CHATGPT: A PERSONAL HISTORY FROM RETRIEVAL TO TRANSFORMERS

It’s a long post, but you can also download the 37-page PDF here: White Paper The Librarian and Testing Innovation Sep 2023

White Paper: Testing and Evaluating ChatGPT

A Personal History from Retrieval to Transformers


By: Stephen Abram, MLS, CEO, Lighthouse Consulting, Inc.

© Lighthouse Consulting, Inc.

Table of Contents

  • This Librarian’s Journey of Testing New Search Innovations: Introduction
  • The Meat of this Paper: How has Testing of New Innovations Evolved?
  • The Big 10: Testing Advice
  • Testing the Generations of Retrieval to Search to Transformer Prompt: A Short (and Personal) History of Search
  • OPACs: Online Public Access Catalogues
  • Online Databases (Citation Only)
  • Online Databases (Fulltext + Citation Only)
  • The Original Internet Search – Gopher, Archie, Jughead, and Veronica
  • The Internet Directory Phase
  • The Internet Adds the Worldwide Web (WWW)
  • Google and Algorithmic Search
  • The Arrival of Images, Sound, and Video
  • The Social Media Spectrum
  • Commerce
  • Enterprise Innovation Arrives: Federated Search and Intranets
  • Artificial Intelligence and Machine Learning
  • The Four Types of Intelligence
  • The Future?
  • Artificial General Intelligence or AGI
  • Artificial Emotional Intelligence (AEI)
  • My Advice for Artificial Times
  • Appendix: Glossary

This Librarian’s Journey of Testing New Search Innovations

Introduction

I began my professional librarian’s journey about 45 years ago, in 1978, when I started my MLS program.  I’ve had the gift of participating deeply in the development of research, professional, and library search systems and innovations.  This front-row seat included early innovation, design, and leadership roles with some of the first fulltext databases, hypertext search systems, publishing system re-designs, online services, federated search, commercial CD-ROM and online databases, web experiences, and more – and, of course, the library world.

I was able to see many ‘killer’ changes that required adoption (and, indeed, cultural change) at the vendors I worked with and in the markets and clients we introduced these innovations to, including these major transitions:

  • Search with Fields → Retrieval with fields, tags, and algorithms
  • Citations and Metadata → Fulltext (with or without metadata)
  • Search → Find
  • Search & Find → Research and Discover
  • Download → Streaming
  • Dedicated Device → Mobile and Any Device (including gaming, TV)
  • Keyboard Entry → + Image, Voice, and Haptic entry
  • Text Only → + Graphics, audio, visuals, video, streaming, etc.
  • English Language Mostly → Just about Any Language (including Klingon)
  • Just in Case → Just in Time and Just for Me (Personalization)
  • Content Silos → Integrated Digital Experiences that “Flow”
  • Content Curation & Artifacts → Workflow, Research Flow, Commerce, Translation
  • List Output → Conversational (chat) Output

If you recall our own development and transition as information professionals, you will recognize these pathways and have a shared experience.  And, the pandemic, in my opinion, hastened many market transitions exponentially – possibly 15 years of change in the span of three!  No wonder we’re tired!

I’ve seen the rise and fall, or the evolution, of the successful players that provide the tools we use and create the content we need.

I’ve witnessed the herky-jerky battles for copyrights, linking, patents, regulation, access, licensing, identity, metadata standards, market share, and so much more.

I’ve seen pain, resistance, leadership, and compelling visions as we travelled through this Renaissance of information and learning.  It’s quite the ride!

I’ve also seen the learning curve by my professional colleagues in adapting to and internalizing these innovations in their organizations and for society at large.

To my mind, there have been many small and HUGE disruptors.  In my view, the largest disruptors were the transitions researchers, users, and vendors made (or didn’t) in adapting to:

  1. The Internet and the Web through their dot-zero (Web 1.0, 2.0, 3.0) plateau changes.
  2. Digital content itself including transitions to Digital First and Mobile First.
  3. Artifact as the main node (i.e., article, song, movie, picture, graphic) and not collections.
  4. And today, the emergent artificial intelligence and machine learning transformers.

These four disruptors changed the world and continue to change, as the movie title suggests, ‘everything, everywhere, all at once.’  We used to use the phrase paradigm shift and found it useful in the last century.  Now, paradigm shifts are business as usual.  Shift happens, and it happens in fits and starts.  Indeed, it is only in retrospect that it looks cohesive – this is called retrospective coherence.  At the time, things could have progressed in a myriad of directions.  For the purposes of this piece, we might find the word “mindset” helpful.

A mindset is an “established set of attitudes, esp. regarded as typical of a particular group’s social or cultural values; the outlook, philosophy, or values of a person; (now also more generally) frame of mind, attitude, [and] disposition.” It may also arise from a person’s worldview or beliefs about the meaning of life.  A firmly established mindset could create an incentive to adopt (or accept) previous behaviors, choices, or tools, sometimes known as cognitive inertia or “groupthink.” Within these concepts, it may be difficult to counteract its effects on analysis and decision-making.[Wikipedia]

I am seeing a repeating pattern today that I’ve seen many times before in my career.  It revolves around initial reactions to ChatGPT or, more broadly, to artificial intelligence and machine learning as applied to libraries and research.  Reactions range through a spectrum from pearl-clutching to wild hype.  I see this as a pretty standard process for mindset change.  I thought to myself, “I’ve seen this before.”  This pattern has been repeated every time a new transformational innovation arrives on the scene.  And, this time, it’s not just an information professional experience, it’s everybody.  However, as librarians and information professionals, we have a duty to learn, study, and have a professionally informed opinion – no matter how much it scares or excites us emotionally.  Our emotions are valid, but they must be tempered with learned perspectives, experience, facts, and knowledge.

Over time, I learned a lot of lessons that I can now share with my colleagues and clients about how to view and test major new entrants in the technology and information spaces.  The sense of humanity and the tools to adapt to these past changes (sometimes many of them in competition) were successful when great minds, great conversations, and insights came together to invent solutions.  Often, these innovations had to duke it out Darwin-style in the marketplace, and only a few survived the evolutionary battle.  All of them were expensive and needed capital to move forward – sometimes military and sometimes private.  Indeed, there are a number of innovations in the open movement that informed the other initiatives.  It was left to the market to adapt and puzzle together a suite of options to survive and thrive.

In my opinion, very few of the most hyped supposed failures actually failed; they created learning opportunities whereby we added the competencies and lenses to decide where the path should lead, rather than where that innovation was trying to take us.  The well-worn path is littered with many elegant ‘solutions’ that never found an actual problem or ignored the adoption cycle, human behaviour, or the capital and talent/human-resource needs for sustainability.  The road hasn’t been easy, even though it can look so in hindsight.  I’ve learned that two-steps-forward-one-step-back is a normal developmental experience.  At the time, the lessons of the past weren’t of much help as we tried to implement risky and challenging strategies in an ambiguous world.

Whole industries and organizations had last-gasp years of great profitability because they didn’t incur the costs of investing in adaptation to technology and environmental changes to survive longer-term.  And then they collapsed.  I can point to several leading library publishers who chose not to adapt to change and were acquired for little more than the value of their archival content.  Back then, people would point to them and say our vision of a digital world platform was flawed or, worse, wrong.  Those sectors and organizations that made the investments early eventually did well . . . and strong vision and leadership from (CEO) champions, with Board support, was often the key to making it.  This applies not just to the for-profit sector, but equally to academic and public libraries and the public-benefit sector, with a few leading-light institutions and cooperatives inventing (or testing) the future.

My observation is that leading through ambiguity is a major leadership competency.  The importance of “play” as a necessary first step in understanding an innovation gets reinforced for me every time.

The Meat of this Paper: How has Testing of New Innovations Evolved?

My hypothesis is that we usually start by using the lens of the past to test new things, and we often find the innovation or results wanting through that lens.  That’s OK, as long as we then ask ourselves a follow-through question: “Is this innovation so transformational that we won’t understand its potential unless we test it differently?”  Early-stage reviews often engage in criticism, defensive posturing, or fear-mongering rather than critical thinking or a view to new potential.  That’s an okay phase – unless you stop there and relax.  The tech sector learned from this market behaviour, and it led to many products staying in beta for long periods of time – even after commercialization.  As we are seeing today, fear for the status quo can rise to the top (e.g., “Is ChatGPT Coming for My Job?” “ChatGPT ‘lies’ or hallucinates”).  I have a small collection of references to leading librarians who wrote about why Google would never replace individual database, field, and Boolean searching.  Sound familiar?

Testing:

In the case of new technologies, a test is a procedure, or an evaluation method, used to measure performance using very specific criteria.  Tests can be used for a variety of purposes or practical assessments.  They may also be standardized or customized, depending on the purpose of the test and the service being tested.  For a test to be effective, it must be designed with clear objectives and criteria for evaluation and administered in a consistent and fair manner.  Results of a test can be used to inform decisions about adoption or contribute to learning and research. (Wikipedia)

Many readers of this piece will have experienced the same era of technological developments and change that I did.  They will recall the vocabulary, services, and products that I mention.  My start was in the early days of digital searching, when we just tested for known item retrieval; we then moved on to searching fulltext.  Some people stopped there.  Our profession has evolved through many search engines and research ‘solutions’, through discovery and workflow systems on the enterprise intranet, and now we are exploring the difference between a search statement and an AI prompt.  I have, for decades, asserted that my colleagues are NOT merely information professionals, and that we are more valuable than that.  Working with us is a higher experience, as we improve the quality of questions through our relationships, services, metadata efforts, and collections.  “Question Pros” is a bit awkward, but there it is.  Our core competency is open-ended questioning and relationship building through what we now call discussion prompts.  Sounds like real intelligence with emotion, not artificial.

For those who do not have the personal experience or history, I have appended a glossary of the terms I use; you can scan or read it now to assist with the history and vocabulary changes of digital search and research innovations.  They’re also represented somewhat chronologically in the graphic that accompanies this opinion piece.

  • Boolean Search
  • WAIS: Wide Area Information Server
  • OPACs: Online Public Access Catalogues and the rise of OCLC WorldCat
  • Online Search
  • Online databases – citation only
  • Online Databases – Fulltext with some citations
  • TCP/IP: The Internet Protocol Suite
  • Command Line Interface
  • FTP File Transfer Protocol
  • The Original Internet Search – Gopher, Archie, Jughead, and Veronica
  • GUI: Graphical User Interface
  • Yahoo! Directory Search
  • Google
  • Algorithmic Search
  • Federated Search / Metasearch
  • ChatGPT and its Clones
  • API: Application Programming Interface
  • Transformer
  • Large Language Models (LLM)
  • Bots and Chatbots
  • Devices, Speed, Storage, and Networks
  • Post Coordinate versus Pre-coordinate Indexing
  • Artificial Intelligence or AI
  • Machine Learning
  • Artificial General Intelligence or AGI
  • Generative
  • GPT: Generative Pre-Trained Transformer
  • Multimodal AI
  • Turing Test

The Big 10: Testing Advice

I’ll expand on this advice below, but I don’t want to bury the lede.  Here are the strategies and philosophies for testing that work for me.

  1. Keep an open mind. Seek learning not resistance. Notice vocabulary changes.
  2. Adjust your lens. Test the innovation for what it intends to do not what another solution did.
  3. Read and listen widely. Understand the goals of users beyond our librarianship bubble.
  4. Challenge Groupthink. Check yourself. Know your biases.
  5. Adjust your frames. Think beyond yourself as an audience.  Engage target users.
  6. Know where you (or your organization) prefer to sit on the adoption curve.
  7. Engage your own creativity but know the limits of reality and imagination.
  8. Understand the Gartner Hype Cycle and the adoption stage(s) things are at.
  9. Avoid assumptions of ‘product’ maturity, especially in technology developments. See the next generation – versions 2.0, 3.0, 3.2, etc.
  10. Become acquainted with the inputs and the context.

Testing the Generations of Retrieval to Search to Transformer Prompt: A Short (and Personal) History of Search

Every new automation innovation that is introduced drags the anchor of the past in its wake.  We start with automation that replicates the solutions of the past.  There is no more elegant example than the card catalogue.  We need to ask ourselves: “Could librarians have invented Amazon?”  We certainly had the stuff – the backbone of Amazon book search is OCLC WorldCat.  I doubt we could have, since I believe change arrives on the shoulders of outsiders at the fringes of our sector.


OPACs: Online Public Access Catalogues

In the beginning of my career as an information professional, we saw the introduction of the online public access catalogue.  We had computers in the backroom for years, but the addition of public access was the innovation.  Previously, we’d had computer-output physical catalogue cards and/or COM (Computer Output Microforms – fiche, film, and ultrafiche) that I used as an undergrad in the ’70s.  All the strengths and weaknesses of physical catalogues were replicated in the first OPACs.  All that changed when we introduced dumb terminals to the users.  In hindsight, we introduced a librarian tool to end-users, and bibliographic instruction (BI) was forever changed.  Sadly, most end-users don’t think like librarians are trained to think.  Pain and blame ensued, and these simple records – a site-based catalogue of mainly books – replicated the physical and organizational limitations of the card catalogue and the library/institution.

Author/title retrieval was the norm, and subject access put a giant magnifying glass on the inadequacy of subject headings for retrieval – especially the old ‘rule of three’ subject headings, sized so they’d fit on 3-by-5 cardstock.  Some vestiges of this era continue in LCSH and the Sears lists of subject headings.  Some decided that training was the answer, while others worked on elegant digital program solutions to address the age-old problem of putting recipes under ‘Cookery’ or feminism under ‘Suffrage.’  Suffice to say, the inherent ‘isms’ (racism, sexism, chauvinism, western centrism, etc.) in our categorizations are still being addressed.  We also tried to deal with the history of changing subject headings over time to address the inherent bias of the system to prefer, for example, white, western, Judeo-Christian culture.

The good news is that we eventually discarded rules that were designed for the limits of a 3-by-5-inch card, stretched MARC to its limits, and eventually struggled successfully toward the goal of linked data, BIBFRAME, FRBR, and more.  However, in the beginning the OPAC sucked, and the new lens we gained from normal end-users challenged us all.

Testing:

  • The main test was known item retrieval. Seek an artifact’s location using known information and go physically find it.
  • Testing was based on the search results (usually chronological in the beginning, with new variations emerging as options).  The first improvement was reverse chronological, so the most recent citations were at the top of the list.  It seems obvious now but wasn’t at the time – nearly every online service defaulted to chronological.
  • What we learned from testing with end-users (and librarians) is that there are many, many points of error, including spelling, synonyms, low or different shared vocabulary, and just plain lack of understanding of how the artifacts were described and the record behaves.
  • Ultimately, we could search collections in other locations and then beyond with the development of the Z39.50 standard, but that was a while off in the beginning.

While some librarians and information pros were trying ‘fix-the-OPAC,’ a parallel development was taking place.

Online Databases (Citation Only)

Basically, the initial online databases were searchable citation indexes of mainly text-based sources.  This was wonderful.  They also had the same weaknesses as OPACs.  You had to search each database separately, and you usually searched a few databases serially for a more complete bibliography of citations to records, cases, books, and articles.  At the same time, the people and organization databases started to arrive – associations, businesses, credit reports, securities documents, patents, who’s who, etc.  Some databases were kept current, others were batch-updated annually like the print source, and even more were limited by how far their back files went (Index Medicus – the original print version of Medline – was not completely historically digitized, resulting in at least one death).  The next step after searching was reviewing the results and choosing which photocopies to track down or which books to borrow.  (I remember the process – search-print-read-highlight-photocopy.)

Testing:

  • Can you retrieve citations to items that may contain information to satisfy the query?
  • What easy tricks can I use in Boolean searching to increase good retrieval from shallowly indexed records?
  • The tests were designed to review the results ‘list.’ Again, professional searchers gradually gained advanced Boolean operators to control the order of the list and search results.
  • Can I get a simple and useful answer or list?
  • Pricing, at the time, was expensive and often based on time and results, so you did a lot of pre-work before searching. We proved our worth by being faster and more accurate – then Google’s algorithms hit from the fringes.

Online Databases (Fulltext + Citation Only)

I was very involved in the transition to fulltext databases.  The massive number of rights and permissions required to build these archival databases demanded a major negotiation, rights-owner education, and conversion effort.  It was a transformational era in the development of search and greatly reduced the time between search and delivery.  That said, the search interface still required a Boolean search statement in the beginning and, for the most part, professional searcher skills.  Teaching Boolean only worked for very small segments of the mass market of users, such as research and patent lawyers, chemistry pros, and clinical researchers.  Librarians held on to their search, find, and deliver roles for a while.  Eventually, users demanded added value from search services, and many librarians delivered it.  Some, however, didn’t adapt quickly and remained behind in old-fashioned step-and-fetch-it library work.

Testing:

  • With the emergence of fulltext, false drops greatly increased, and so did the ability to narrow searches and target results. Can you use Boolean to narrow results, increase coverage, and improve the ratio of useful items retrieved to false drops? (A small scoring sketch follows this list.)
  • Can I quickly and cheaply review the citation list and request the fulltext? Does the fulltext meet my user’s needs when it is in older versions of ASCII – often in Courier fonts, with no layout features (i.e., bold, italics, headlines, etc.), fields visible, and no pictures, graphs, tables, columns, formulae, etc. to enhance readability?
  • Are your search skills demonstrably better than end-users?
  • Can you deliver value in terms of being faster (time savings), higher quality results, or less expensively (budget savings or client chargebacks), than an end-user?
  • Can I do this fast and cheap? (Pricing was usually based on time spent, number of search statements, and downloads.)
  • Does this fulltext service provide expanded Boolean operators that support codes, synonym generation, truncation, and near-term, narrower-term, and broader-term functionality?
  • This stage arrived about the same time as mass adoption of the desktop PC, so downloading (e-mail too) the results, artifacts, etc. was a blessing.
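For readers who like to see the arithmetic, here is a minimal sketch (my illustration, not from the paper) of the evaluation measure named in the first bullet above – scoring one search’s result set by its useful items, its false drops, and their ratio – in Python:

    def retrieval_quality(retrieved: set, relevant: set) -> dict:
        """Score one result set against a judged set of relevant items."""
        useful = retrieved & relevant        # true hits
        false_drops = retrieved - relevant   # items that matched but don't help
        return {
            "useful": len(useful),
            "false_drops": len(false_drops),
            "useful_to_false_drop_ratio": len(useful) / max(len(false_drops), 1),
            "precision": len(useful) / max(len(retrieved), 1),
        }

    # Example: a Boolean search returned 8 documents; 5 were judged relevant.
    retrieved = {"d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8"}
    relevant = {"d1", "d2", "d3", "d4", "d5", "d9"}
    print(retrieval_quality(retrieved, relevant))

A tighter Boolean statement should raise the ratio; a looser one raises coverage at the ratio’s expense.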

And then, of course, The Internet arrived.  Also, remember that the Internet arrived before the Worldwide Web.

The Original Internet Search – Gopher, Archie, Jughead, and Veronica

The Internet Directory Phase

The initial phase of the internet was very disaggregated.  That means that tens of thousands of databases were available on the Internet, but identifying where to search was the problem.  Enter FTP, Gopher, Archie, Veronica, Jughead, and WAIS.  The internet sources weren’t comprehensive in the least.  We used Internet protocols and tools to find databases on the network and supplemented them with traditional sources.  This period lasted a while and remained a professional searcher’s domain due to its specialized skills and opacity.  It was primarily the domain of librarians, R&D, the military, and academics.  Again, identifying a place to search for stuff required you to know how to find, and then how to search, the discovered resources.

Testing:

  • Can I find a source server that has the information that I need to search?
  • Can I get that information as documents or records to my computer?
  • What is the coverage? What is missing?

Then, of course, the amazing Worldwide Web arrived within most humans’ living memories.  Of course, the WWW arrived with few browser standards, horrible connection speeds, and a Gold Rush mindset.

The Internet Adds the Worldwide Web (WWW)

Most of today’s workers didn’t live through the evolution that included the arrival of browsers based on standards, standards for content definition, interoperability, speed improvements, and more – including letting everyone in.  Very quickly, people conflated the Internet with the WWW – but they’re not the same thing.

The early web had hundreds of popular ‘search engines’ like Excite, OpenText, and Yahoo!.  It was an exciting time.  Yahoo! was the breakout player in that it provided a human-crafted directory of websites.  It used a taxonomy that collected these websites into categories and sometimes displayed these in its GeoCities sites.  Of course, the web grew faster than human indexers could keep up with, and, sadly, Yahoo! made the newbie error of using post-coordinate indexing and a too-small taxonomy, which required constant review of subject categories so that they didn’t overwhelm results and users.  Their corporate mindset was distracted by strategies involving market or information portals.  During the same period there was also a “Librarians’ Index to the Internet,” whose goal was to be more selective.

Testing:

  • Where are the sources online that I want to search for information? Are they online yet?
  • Can I retrieve websites that contain information that I want to visit and review?
  • How is printing enabled? With the adoption of HTML as the WWW standard, we finally got layout and rediscovered publishing. Tools like XML and PDF enhanced this innovation.  Also, we got visuals as well as extended ASCII and more languages with all of their diacritics and international voices.
  • How do I determine currentness?

What disrupted Yahoo!?  Besides its culture, and its inadequate vision of portalizing information, Yahoo! was trampled by algorithmic search engines like Google.

Google and Algorithmic Search

Although we saw hints of this innovation earlier, it was Google that brought it to the forefront of our consciousness.  At the time of writing, Google is turning 25.  Rather than retrieving websites and sources by the indexing standards of the 20th century, Google parlayed its insights into a successful business model.  By returning results based on the popularity of hypertext links, Google changed the game.  The PageRank® algorithm evolved beyond popularity to include commercial optimization (SEO), advertising, and a complicated series of advanced taxonomies, algorithms, and filters that are updated often.  Tracking these updates is called the Google Dance.  Their innovation wasn’t just algorithmic search but being able to address currentness and speed in this new medium.  Their bots scraped faster-changing websites – like news sites – very frequently.  The value-to-price (free) ratio was right on.  Their real innovation was capturing the advertising revenue.
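As a minimal sketch (my illustration; Google’s production system is vastly more elaborate), the core PageRank idea fits in a few lines of Python: a page’s score is fed by the scores of the pages linking to it, damped by a factor modelling a surfer who occasionally jumps to a random page.

    import numpy as np

    def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
        """Classic power-iteration PageRank over a small link graph."""
        pages = sorted(links)
        idx = {p: i for i, p in enumerate(pages)}
        n = len(pages)
        m = np.zeros((n, n))  # m[j, i] = probability of stepping from page i to page j
        for page, outlinks in links.items():
            if outlinks:
                for target in outlinks:
                    m[idx[target], idx[page]] = 1.0 / len(outlinks)
            else:  # dangling page: treat it as linking everywhere
                m[:, idx[page]] = 1.0 / n
        rank = np.full(n, 1.0 / n)
        for _ in range(iters):
            rank = (1 - damping) / n + damping * (m @ rank)
        return dict(zip(pages, rank))

    # Toy web of four pages: "c" collects the most inbound links and wins.
    print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}))

The popularity signal is exactly what the paragraph above describes: inbound links act as votes, weighted by the rank of the voters themselves.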

Testing:

  • Is it fast? Does it get what I need in the first page of results?
  • Am I satisfied? Am I satisficed?  What is the difference between ‘good enough’ results and comprehensiveness and authoritativeness goals in the context of the end-user?
  • Whom does it serve? Me as a searcher or those paying for link placement? (Or both?)
  • Advertising and SEO were not fully there in the early years of Google. When they did arrive, we needed to review results for search engine optimization and sponsorship.  Ultimately, we needed to review the impact of bad actors.

Google’s dominance and evolution continued through almost the first two decades of this century.  Clouds were on the horizon for them, but a few things drew our attention away from text-based search – social media and the arrival of more formats on the web.

The Arrival of Images, Sound, and Video

Images started to arrive quickly on the web – since it is, by and large, a visual medium.  With technologies and standards that were ruggedized rapidly by the well-funded porn industry, we adapted to thumbnails and to downloading images of art, posters, pictures, chemicals, and so much more.

Sound arrived as downloadable (which could take a while) and shareable files with the MP3 standard, popularized by Napster et al.  Eventually, streaming allowed for the addition of video and sound to nearly everything and the creation of radio stations and services like Spotify.

Video arrived as a (very slow) download, but quickly became a streaming experience that challenged everyone’s bandwidth (Buffering . . .).  YouTube is now the second most popular search engine.  It is owned by Google and has more than 2.5 billion monthly users, who collectively watch more than one billion hours of videos each day.  Parallel developments in improved bandwidth created a key disruption for entire industries.

Speed was another innovation area that changed the industry.  Moving from the 110 baud of Telex, through 300, 1200, 2400 baud and then broadband and into Wi-Fi was transformational to the emergence of non-textual content.  Tied to progress in miniaturization and battery life, the world changed massively.

Vinyl, DVD, Blu-ray, and CD-ROM formats were pushed to the wayside by these services and other Internet-enabled services like Netflix and HBO, et al., and now inhabit a tiny market for a niche crew of aficionados.

Parallel to the changes in image, sound, and video, we saw a huge uptick in ‘experiences’ on the web, with online videogaming services becoming the largest publishing entities, by far, combining the game experience with social and collaborative features.

The real shifts we experienced were that collections became mainly organizational assets for use and delivery, while end-user disruptions drove the album into a song-based paradigm, journals into an article-based paradigm, books largely into an Amazonian behemoth (with non-fiction often retrieved at the chapter and paragraph level), and movie theatres and film into a long-tail discovery paradigm.

Testing:

  • Testing discovery here was complicated. It has features of the past and future, but the main innovation was the recommendation engine, based on algorithms that were sometimes behaviourally tuned.
  • Retrieval was easy based on simple metadata. If we wanted to retrieve, we could identify and retrieve.  That could fool folks who only tested search and not discovery.
  • Of course, that wasn’t the common modality of search for alternate media formats. It isn’t the record that’s important to the searcher.  They’re often searching for the experience of music, image, or film.  How does one test for ‘experience’ satisfaction?
  • The answer proved to be in the difference between retrieval/search and discovery. With the arrival of a wide variety of different formats, discovery services became popular.  What are my friends watching?  What’s popular right now?  More like this?  Recommendations?
  • It was exciting to watch this evolve for reading choices at Amazon, and it went big when Netflix and Napster, Spotify and their ilk arrived on the scene.
  • Applying these features to research information (dissertations, journals, preprints, articles, patents, etc.) is the goal of many start-ups in our sector.

The Social Media Spectrum

We can’t explore changes in search without talking about social media.  We always knew the concept of the ‘invisible college,’ and then – suddenly – it became visible to all as MySpace, Facebook, LinkedIn, Slack, and more grew and evolved.

Again, with emphasis, we are in an experience space characterized by everything represented by the human condition – the good and the bad.  At its best one can experience collegiality, teamwork, sharing, and respect.  At its worst one can experience bigotry, arguments, dogma, incivility, bullying, and more.  That said, most choose at least one, usually many, social media environments whether that’s LinkedIn, Facebook, Slack, TikTok, YouTube channels, Instagram, Twitter, Threads, or whatever.  Some of us use these as part of our work or we do research in these social spaces.  Either way, from the launch of MySpace through Facebook and LinkedIn and the other places that I’ve been known to go to occasionally like Reddit and Twitter, we can’t ignore the impact of these environments on our field and society.

Testing:

  • Is it a gainful experience? What is its bias and am I OK with that?
  • Can I now search opinion and viewpoint shifts in real time? The impact on political engagement is clear.
  • What is the experience of this space, and can I gain from it?
  • Is it too polluted to visit or can I manage my ‘friends’ well enough to curate a positive learning and engagement experience?

Commerce

Just a quick note here that throughout the development of Internet search and the web there has been a strong trend towards commercial payments and revenue generation.  The explosion of retail on the web, tied to credit/debit card payments and the invention of alternative payment systems like Square, PayPal, Venmo, loyalty programs, and cryptocurrencies, required adaptation and review.  It brought a lot of money to the web in terms of equity and venture capital investments to move developments along faster.

Testing:

  • Is it safe? Is it secure? Are there sufficient protections?
  • Do I trust it/them/this site?
  • Who is collecting the data and what do they use it for?

Hardware

I’ll just add a tip of the hat to key hardware changes that developed over this period.  The main ones affecting the database and search space were chip design (ever-increasing speed and specialized processors), storage (140K disks to petabyte drives), bandwidth speed (fibre and satellites), Gorilla Glass touch screens, and improved battery life.  Each of these enabled larger databases, larger content, better interfaces, faster results, quicker downloads, mobility, and streaming.  All of them materially changed in size, affordability, and functionality over many generations of version introductions.  As someone who started out on a dedicated desk-sized search station, acquired a Silent 700 dumb terminal, used a lot of pin-fed printers, and grew into the personal computing environment and plethora of devices we use today – every shift and innovation seemed like magic.  Nothing was more exciting than getting a new modem and faster speed.  That said, the results were larger search sources and rising impatience.

Enterprise Innovation Arrives: Federated Search and Intranets

Directory search was an important phase and is echoed in the current developments in metasearch and federated search.  The ability to discover where to search and to transfer to a more powerful search engine, possibly using taxonomies, Boolean, tags, fields, and other powerful metadata, remains a poorly understood feature of this tool.

Testing:

  • Does this federated search tool allow me to find the best database to search of the thousands out there?
  • Does this federated search tool give me enough information to choose the few destination services that I want to explore further?
  • Do not evaluate metasearch for answers. That’s not what it does.  Its results are pathways to further exploration.
  • In the long term, we see the new Fediverse arriving in 2023.

Enterprise adoption of Intranets (using Internet and Web protocols and standards) differs from the full web only in who is permissioned into the Intranet and what permissions they may have for highly curated content.  With these environments, the concepts of identity and usage rights evolved.  Large institutions, like universities with their multiple and different needs, drove the exploration of federated identity management to serve access needs, including the different needs of employees/co-workers, undergrads and grad students, faculty, administration, R&D labs, departments, associated institutions like teaching hospitals, and more.  Usage rights are a very complex problem.

Testing:

  • Is this Intranet portal organized in a way that the tools allow end-users to find what they need, save time, and align it with their learning, decision-making, and professional/organizational goals? Does it encourage quality and improved decision-making?
  • Can I learn this well enough to derive value from use? Am I overwhelmed?
  • Is it accessible from where I work beyond my desk, and on any device?
  • Can I meet all lawful requirements like licensing terms and copyright rules?

Artificial Intelligence and Machine Learning

Now I get to the meat of the changes that we’re currently experiencing.  Over the course of this piece, we’ve explored the testing differences that we lived through, and now you can see, through the lens of recent history, the changes in the search space: known item retrieval, general retrieval, directory search, algorithmic search and discovery, commercialization, social media, and entertainment experiences.  Obviously, I only recognized many of these shifts in retrospect, and I played with these innovations, including many failed ones, early just to learn.  I know that when you’re in the change dynamic, you experience confusion, ambiguity, learning, insights, and more.  I feel that I have a good track record of identifying the most impactful innovations quickly.  Personally, I find this process fun too.

One key insight that I offer is that when language changes, it is an indicator of a major change.  Earlier, I noted that the language changed when we moved from ‘retrieval’ to ‘search.’  Roy Tennant and I have noted extensively in our keynotes that librarians love to search, but end-users want to ‘find.’  That was back in the ’90s and used the lens of research and discovery.  I also note that we moved from BI (Bibliographic Instruction) to Information Literacy/Fluency, from search and retrieval to discovery, and, finally, to prioritizing user experience (UX).  At this time, we must again adjust our lenses for testing AI-enabled innovations.  We can recognize the necessary shift in the language that we’re seeing in the AI development spaces.

Here’s my take on the list of vocabulary changes inherent in the AI and Chatbots space:

  • Prompt (vs. search statement)
  • Response (vs. results and lists)
  • Machine Learning (vs. Database Schema)
  • Large Language Models (Search language vs. natural language)
  • Pre-Trained (probably wholly new but we might think of this activity transferring from human learning to machine learning)
  • Transformer (Wholly new. Generally, creating and writing were the purview of humans, now not so much)
  • Neural Network (The Internet was originally connected by wires and servers into nodes. A neural network is a different organizational paradigm, closer to the human brain.)
  • Artificial Intelligence (AI) (sometimes called Artificial Narrow Intelligence)
  • Artificial General Intelligence (AGI) (Super-Intelligence?)
  • Artificial Emotional Intelligence (AEI) (Theoretical)

The Four Types of Intelligence

This is why AI and AGI differ, and why AGI is not measured by fully human standards.

“According to Psychologists, there are four types of Intelligence:

  • Intelligence Quotient (IQ)
  • Emotional Quotient (EQ)
  • Social Quotient (SQ)
  • Adversity Quotient (AQ)
  1. Intelligence Quotient (IQ): this is the measure of your level of comprehension.
    You need IQ to solve maths, memorize things, and recall lessons.
  2. Emotional Quotient (EQ): this is the measure of your ability to maintain peace with others, keep to time, be responsible, be honest, respect boundaries, be humble, genuine and considerate.
  3. Social Quotient (SQ): this is the measure of your ability to build a network of friends and maintain it over a long period of time.

People that have higher EQ and SQ tend to go further in life than those with a high IQ but low EQ and SQ. Most schools and training capitalize on improving IQ levels while EQ and SQ are played down.  Your EQ represents your Character, while your SQ represents your Charisma.

Now there is a 4th one, a new paradigm:

  4. The Adversity Quotient (AQ): the measure of your ability to go through a rough patch in life and come out of it without losing your mind. When faced with troubles, AQ determines who will give up, who will abandon their family, and who will quit their job or even fall into depression.

Parents and Leaders: please expose your children and employees to other areas of life than just Academics and pure Management.  Develop their IQ, as well as their EQ, SQ, and AQ.  They should become multifaceted human beings able to do things independently of their parents and leaders.

According to psychologists, having a high Intelligence Quotient (IQ) alone is not enough to excel in life. Emotional Quotient (EQ), Social Quotient (SQ), and Adversity Quotient (AQ) are equally important in determining success.

This holds true for corporate employees as well, who need to have a well-rounded personality to excel in their professional lives.”

I believe that we need to start exploring and learning the meaning of this new vocabulary in the context of testing AI innovations like ChatGPT and its ilk.  AI transformers require a very different kind of testing and evaluation.  That’s OK.  We have also seen that machine learning moves quite fast!  In just 10 months since the release of ChatGPT 3.5, we’ve seen it improve its performance at passing standard exams; the release of version 4.0, embedded in too many tools to count (from Turnitin, to the Microsoft suite of applications, to Zoom); the release of a ton of clones; and OpenAI’s competitors releasing their responses (e.g., Google Bard, DuckDuckGo).  Any conclusion we might come to today is likely to be out of date quickly – sometimes in days or weeks.

Isabella Bedoya recommends using this anatomy for a ChatGPT prompt:

  1. Act or Instruction
  2. Context
  3. Task or Question
  4. Strengths or Limitations
  5. Additional Guidance
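As a minimal sketch of that anatomy in practice (the prompt wording and API usage below are my own illustration, not Bedoya’s; the call shows the OpenAI Python library as it worked in 2023, and you would need your own API key and current model names):

    import openai

    openai.api_key = "sk-..."  # placeholder; substitute your own key

    # One line per part of the anatomy above.
    prompt = "\n".join([
        "Act as a reference librarian at a mid-sized public library.",   # 1. Act or Instruction
        "A patron is planning a small backyard beekeeping project.",     # 2. Context
        "Suggest five beginner resources, each with a one-line reason.", # 3. Task or Question
        "Prefer free or library-licensed sources; flag any US bias.",    # 4. Strengths or Limitations
        "Use plain language and stay under 200 words.",                  # 5. Additional Guidance
    ])

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message["content"])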

Note that the prompt guidance for ChatGPT can be longer than we are used to with search statements.  Framed well, it can improve the response (not a results list).  In older times, we knew that too much Boolean restricted search results.

Isabella Bedoya recommends these prompt starters.  Note that the initiator predicts the creation of prose and not a list of links:

  • Continue
  • Elaborate
  • Summarize
  • List
  • Compare and Contrast
  • Pros and Cons
  • In Simple/Layman’s terms
  • Clarify
  • Imagine
  • Act as
  • Step by Step
  • Brainstorm
  • Rephrase
  • Rank
  • Devil’s Advocate
  • Roleplay
  • Translate
  • Retrofit
  • Critique
  • Troubleshoot
  • Analogous

She curates a lot of prompts by target audience here:

AI Tools & Prompts

https://aitools.thefamehackers.com/

Interestingly, testing ChatGPT (and its clones) as a distinct service is probably just a small first step.  Initially, we have an opportunity to evaluate AI out of the context of applications like Word or PowerPoint.  The next testing phase is easily predictable.  ChatGPT seems ready for prime time as a “Guide-on-the-Side” in applications as diverse as MS Word/PowerPoint/Excel, Zoom, and in SEO, coding, and narrower enterprise applications.  The integration of AI into all digital toolsets has arrived – and then we need to test from a lens of fit-for-purpose and workflows.  We have an analogy for this in the past emergence of integrated library systems (the ILS).  But, then again, that’s another history.

As for our information professional “information literacy” stock in trade, that’s where the real meat will be for us.  How do we train users on this innovation?  Surveys show that many, maybe most, users are already using AI tools.  Can we introduce more critical thinking into reviewing AI responses?  Can we teach better formulation of ‘prompts’?  Can we introduce it as a “Guide-on-the-Side” or “Co-pilot” to our users in their preferred digital toolset?  Can we teach evaluation methodologies to evaluate the “Guide”?  Can we teach critical thinking, bias, and ethical use skills for AI?  And can we help to solve the “sources and citations” issue?

Testing

  • You’re testing the value of how it responds to the prompt or question you give it. You’re not testing it using a formulated search statement and you should not be using Boolean.
  • Test your results on the written response (with a word limit that has increased but is still short). You’re reviewing the response and those responses can change – over time as the machine learning continues or its pre-training enlarges.  It can also change the response on request (ChatGPT uses “Regenerate response”).  I asked ChatGPT to explain the “Regenerate response” button:

“Sure! As an AI language model, I can generate responses to questions and prompts based on the input provided to me. Whenever you ask me a question or provide me with a prompt, I use my machine learning algorithms to analyze the input and generate a response that is intended to be relevant and helpful.

It’s important to note that the responses I generate are based solely on the information and data that has been input into my system. If you ask me to provide information that I do not have in my database, or if the input is vague or ambiguous, my responses may not be accurate or helpful.

However, if you feel that my response was not satisfactory or if you want me to provide a different response, you can always ask me to regenerate my response by asking the same or a similar question again, or by providing additional information or context to help me better understand your query.”

I used the “Regenerate response” button on the above result:

“Sure, I’d be happy to explain. As an AI language model, I generate responses based on the input that I receive. Whenever you ask me a question or provide me with a prompt, I analyze it and use my pre-existing knowledge to come up with a response. This response is generated in real-time and is unique to your specific input.

However, since I am an AI language model and not a human, I do not have the ability to remember previous interactions or responses. So, if you ask me the same question multiple times, my response will likely be different each time, based on the specific context and information available to me at the time.

In other words, each time you interact with me, I will “regenerate” a response based on your input and the information that I have available to me. This means that you can ask me a question or provide me with a prompt as many times as you like, and I will generate a new response each time based on the information available to me at that moment.”

Testing:

  • Can you find the sources for the response? Can you get citations?
  • Can we look for the differences between one AI engine and another? Hint: Compare ChatGPT and Bing – they’re not the same. Compare Bing Chat and Google Bard.  Or, for real fun, compare ChatGPT, DuckDuckGo Chat, and FreedomGPT (which has no guardrails based on ethics, disinformation, or anti-racism concerns).

The Future?

Artificial General Intelligence or AGI

Artificial General Intelligence (AGI) refers to the hypothetical ability of an artificial intelligence system to understand or learn any intellectual task that a human can. It is sometimes also referred to as strong AI, full AI, or human-level AI.

AGI systems are capable of thinking and reasoning abstractly, acquiring knowledge and skills through experience, adapting to new situations, and solving a wide range of problems in various domains. They can also understand natural language, learn from experience, and transfer knowledge from one domain to another.

While most existing AI systems are designed to perform specific tasks or solve particular problems, AGI systems would be able to perform a wide range of tasks, learn new ones, and apply knowledge across multiple domains. Achieving AGI is a major goal of artificial intelligence research, but it remains a challenging and elusive goal, with many technical and ethical hurdles to overcome.

Some experts believe that achieving AGI could lead to a technological singularity, a hypothetical point in time when AI surpasses human intelligence, leading to exponential advances in technology and potentially transforming human society in unpredictable ways. [ChatGPT]

AGI supposes a time when AI approaches or surpasses human-level intelligence.  Evaluating this will be a challenge.  What it means to be human can take on many facets – sociological, biological, emotional, creative, and more, including the ability to share, make jokes, and so much more.  It is probably true that the Turing Test, with its idea that conversation would be indistinguishable between humans and computers, is approaching the end of its usefulness.  This might be an evolutionary event, like the milestones that passed when computers won at grandmaster chess or at Jeopardy-champion levels of achievement.

I propose that we need to understand what ‘general’ intelligence is, what comprises memory, and what is ‘learning’ at its core and beyond.  When does AI start to resemble brain level processing (and not a computing paradigm)?  And does it matter?  Science fiction is full of imaginative stories where technology has crossed the boundary between general intelligence and emotional intelligence and ambition.  Where do emotional and general intelligence intersect – art, insight, management, creation, music, humour, culture, or whatever?  Are we ready for that debate of where the DMZ should be for AI?

Testing: So, if we start to approach AGI-level results, how would we test them?  Philosophers have been debating what it means to be human for centuries.  Maybe philosophy is one of the few disciplines that isn’t as much at risk from ChatGPT and AI.

Artificial Emotional Intelligence (AEI)

My own addition to this AI space is a next step in AI development that would come after AGI appears: the potential emergence of Artificial Emotional Intelligence (AEI), where we could see the creation of digitally emotional beings.  While theoretical, it is fun for me to consider what comprises a fully sentient human being – and what might approach that in a digital being.  I’d start by stating that digital beings do not need to look like humans or robots.  Consider the experience of another human over the telephone or a video call.  At what point would we not be able to tell the difference?  Consider the role of relationship memory, humour, creativity, and caring, etc. . . .

My Advice for Artificial Times

  1. Keep an open mind.

This is harder than it looks.  Having too open a mind is likely a terrible idea too.  Our guardrails include integrity, ethics, morals, and more.  Both openness and guardrails need to be engaged in evaluation.  My perspective and advice are that we need to engage our open-mindedness and critical thinking (not criticism) skills in tandem at this stage of the generative AI developments.  Creative thinking will be helpful.

  2. Seek learning not resistance.

I learned long ago that straight resistance and criticism were learning barriers.  One must reframe resistance as questions, renewal, playfulness, and, indeed, enlightenment.

  3. Adjust your lens.

Test the innovation for what it intends to do.  Adjust your frames.  Think beyond yourself as an audience.

Knowing the lens that we use and occasionally challenging that lens is healthy.  We often kick in our librarian lens (a strength in our toolkit), but sometimes we lose the thread when we encounter transformational technology changes for end-users.  AI is one of these shifts.  We must test that it meets its promised intent – not test it like just another search engine.

  4. Read and listen widely.

Understand the goals of users beyond our librarianship bubble.

AI’s intent is to transform the flows for society.  In our sector, this flow includes, among many, research-flow, workflow, publishing flows, and learning flows.  Our professional boundaries are being challenged to be more permeable.

  5. Challenge Groupthink. Check yourself. Know your biases.

Everyone has biases – both personal and professional.  This isn’t necessarily a bad thing.  It is a matter of awareness, and being aware of your biases generates a second breath of review.  In my opinion, our profession suffers from two common dysfunctions.  One is groupthink and the other is risk/conflict avoidance.

Definitions of Groupthink:

  1. “The practice of thinking or making decisions as a group in a way that discourages creativity or individual responsibility” [Dictionary.com]
  2. “A pattern of thought characterized by self-deception, forced manufacture of consent, and conformity to group values and ethics.” [Merriam Webster]
  3. “Groupthink is a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity in the group results in an irrational or dysfunctional decision-making outcome. Cohesiveness, or the desire for cohesiveness, in a group may produce a tendency among its members to agree at all costs. This causes the group to minimize conflict and reach a consensus decision without critical evaluation.” [Wikipedia]

“Social psychologist Irving L. Janis identified the following characteristics or symptoms of groupthink behavior:

  1. Direct pressure: Groupthink divides groups into two camps: the in-group and the out-group. The in-group agrees with a decision, while the out-group raises questions or disagrees. The in-group can pressure the out-group to conform to groupthink or risk members viewing them as dissenters or disloyal.
  2. The illusion of invulnerability: Lack of questioning or alternate opinions makes in-group team members feel overconfident, leading to greater risk-taking when making decisions.
  3. The illusion of unanimity: Group members view the lack of questions regarding their decisions as a sign that everyone in the group agrees with them. The sense of a unified front makes it harder for others to present a dissenting opinion.
  4. Mindguards: Individual members act as self-appointed gatekeepers, shielding the group leader and other members from different opinions. They keep out any outside influence that might negatively impact group identity.
  5. Rationalizing: Groupthink encourages group members to dismiss any outside information, especially warnings or criticisms. Paying attention to this information might make them think deeper about or reconsider their opinions.
  6. Self-censorship: Victims of groupthink will repress any ideas or opinions that put them at odds with the group. They may even come to doubt their thoughts and beliefs.
  7. Stereotyping: In-group members may argue with and verbally abuse out-group members for their dissenting opinions. Negative biases, which paint them as ignorant, weak-willed, or morally corrupt, may also be part of stereotyping.
  8. Unquestioned belief: Illusions of invulnerability, combined with the in-group’s unwavering belief in their own moral and ethical correctness, can lead to defective decision-making. It also causes group members to disregard any consequences for their actions.”
To Janis’s symptoms, I would add our profession’s other dysfunction: risk/conflict avoidance and its sister, criticism disguised as critical thinking.

  6. Know the adoption curve.

Where you and/or your organization prefer to be.

The wisdom of personas and differences in adoption behaviours continues to be borne out.  Different people perceive innovation differently at different stages of the product cycle.

  7. Engage your own creativity but know the limits of reality and imagination.

You can learn as an individual and get oriented.  At first, play, playfulness, and experimentation while withholding judgment are good strategies.  There is a reason why product managers do alpha and beta tests, engage surveys, and focus groups, and do sandbox trials.  Humans come with a wide variety of skills, insights, and perspectives that assist in the growth of products and ideas.

  8. Understand the Gartner Hype Cycle and the stage(s) at which the tech rests.

Throughout my career I’ve seen new innovations evaluated through the lens of, for example, a mature product, when they are only at their earliest stage.  This is the equivalent of throwing out the baby because it isn’t a good accountant.  Often, the innovation is not a ‘product’ at all but a mere feature or function of a larger trend.

  8. Avoid assumptions of maturity.

Let’s keep in mind that generative AI is VERY young – a newborn, not yet a toddler.  It is learning at an unbelievable speed.  We don’t know when (or if) it will achieve human levels of cognition and communication.  We do know that it won’t be the same next month or next year.  We can safely predict that there will be breaks in the road, tragedies, and successes.

  9. Never frame our reactions to these conversations as direct challenges or objections.

Use the tried-and-true practices of sales professionals for responding to objections.  Re-frame the challenges or objections – no matter how they are voiced – as requests for understanding, more information, and follow-through, while maintaining respect for other points of view.  Avoid defensiveness at all costs.

  10. Take the conversation to a higher level.

Let’s see opportunities and cautions.  Let’s enjoy the ride and be part of the process.

I’ll leave this exposition at this point.  I hope you found it engaging and thought-provoking.

I continue to learn.

Appendix: Glossary

I am attempting to put this in a rough learning and historical order rather than alphabetical.

  • Boolean Search
  • WAIS: Wide Area Information Server
  • OPACs Online Public Access Catalogues and the rise of OCLC WorldCat
  • Online Search
  • Online databases – citation only
  • Online Databases – Fulltext with some citations
  • TCP/IP: The Internet Protocol Suite
  • Command Line Interface
  • FTP File Transfer Protocol
  • The Original Internet Search– Gopher, Archie, Jughead, and Veronica
  • GUI: Graphical User Interface
  • Yahoo! Directory Search
  • Google
  • Algorithmic Search
  • Federated Search / Metasearch
  • ChatGPT and its Clones
  • API: Application Programming Interface
  • Transformer
  • Large Language Models (LLM)
  • Bots and Chatbots
  • Devices, Speed, Storage, and Networks
  • Post Coordinate versus Pre-coordinate Indexing
  • Artificial Intelligence or AI
  • Machine Learning
  • Artificial General Intelligence or AGI
  • Generative
  • GPT: Generative Pre-Trained Transformer
  • Multimodal AI
  • Turing Test

Boolean Search

A Boolean search, in the context of a search engine, is a type of search where you can use special words or symbols to limit, widen, or define your search.  At its simplest, this is possible through Boolean operators such as AND, OR, NOT, and NEAR, as well as the symbols + (add) and - (subtract).  However, there are many other permutations of Boolean operators that many librarians became expert at searching with.  When you include an operator in a Boolean search, you’re either introducing flexibility to get a wider range of results, or you’re defining limitations to reduce the number of unrelated results. [https://www.lifewire.com/what-does-boolean-search-3481475]
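
To make the mechanics concrete, here is a minimal sketch in Python (the document collection and query terms are invented for illustration) showing how AND narrows, OR widens, and NOT excludes:

    # Toy inverted index: term -> set of document IDs (invented data)
    index = {
        "library": {1, 2, 3},
        "search": {2, 3, 4},
        "catalog": {1, 3},
    }

    # AND narrows: documents must contain both terms
    print(index["library"] & index["search"])   # {2, 3}

    # OR widens: documents may contain either term
    print(index["library"] | index["catalog"])  # {1, 2, 3}

    # NOT excludes: documents with the first term but not the second
    print(index["search"] - index["catalog"])   # {2, 4}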

WAIS: Wide Area Information Server

WAIS (pronounced “ways”) was a client-server database search system introduced in 1990, before the widespread adoption of the World Wide Web.  It indexed the contents of databases located on multiple servers and made them searchable over networks, including the Internet.  WAIS databases, referred to as sources, contained mostly text-based documents, although they could also contain sound, pictures, or video.  Text searches of the databases were ranked by relevance, with files containing more keyword hits listed first.  As the web grew in popularity, WAIS usage declined and eventually faded into obsolescence; the service has now been completely replaced by modern search engines.
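
The “ranked by keyword hits” idea can be shown with a minimal Python sketch (the documents and query are invented); texts containing more of the query terms sort first:

    # Rank toy documents by the number of keyword hits, most hits first,
    # loosely mimicking WAIS-style relevance ranking (texts are invented).
    docs = {
        "doc_a": "gopher menus and gopher servers",
        "doc_b": "wais servers rank results by keyword hits",
        "doc_c": "keyword hits in wais sources ranked by wais",
    }
    query = ["wais", "keyword"]

    def score(text):
        words = text.split()
        return sum(words.count(term) for term in query)

    for name, text in sorted(docs.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(name, score(text))  # doc_c 3, doc_b 2, doc_a 0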

OPACs Online Public Access Catalogues and the rise of OCLC WorldCat

Libraries were early pioneers of database development, creating human-generated records with standardized fields using MARC, which eventually led to OCLC and WorldCat.

“Although a handful of experimental systems existed as early as the 1960s, the first large-scale online catalogs were developed at Ohio State University in 1975 and the Dallas Public Library in 1978.  These and other early online catalog systems tended to closely reflect the card catalogs that they were intended to replace. Using a dedicated terminal or telnet client, users could search a handful of pre-coordinate indexes and browse the resulting display in much the same way they had previously navigated the card catalog.

Throughout the 1980s, the number and sophistication of online catalogs grew. The first commercial systems appeared and would by the end of the decade largely replace systems built by libraries themselves. Library catalogs began providing improved search mechanisms, including Boolean and keyword searching, as well as ancillary functions, such as the ability to place holds on items that had been checked out.  At the same time, libraries began to develop applications to automate the purchase, cataloging, and circulation of books and other library materials. These applications, collectively known as an integrated library system (ILS) or library management system, included an online catalog as the public interface to the system’s inventory. Most library catalogs are closely tied to their underlying ILS system.” [Wikipedia]

Online Search

Online search is the process of interactively searching for and retrieving requested information via a computer from databases that are online.  The original online searching was driven by professional searchers using a Boolean toolkit and command line prompts (often with a little UNIX thrown in). Interactive searches became possible in the 1980s with the advent of faster databases and smart terminals.  In contrast, computerized batch searching was prevalent in the 1960s and 1970s.  Today, searches through web search engines constitute the vast majority of online searches.  Online searches often supplement reference and research transactions.

Online databases – citation only

My experience with online databases started with batch searches and moved on to real-time searching (although at 110 and 300 baud) using SDC Orbit, Lockheed Dialog, and Infomart.  The databases were largely citation only (ERIC, USGPO, AGRICOLA, etc.) or fact-based directory databases (Who’s Who, telephone books, etc.).  The skill here was that near-exact matches were needed to retrieve good ‘answers’ to your queries.

Online Databases – Fulltext with some citations

In the beginning, most of the major databases were citation only, basically replicating the MARC record (an innovation that made TIME’s list of the Top 100 innovations of the millennium).  It is not hard to diagnose the weaknesses of this format, but it was a huge step forward.  Some databases hired people to write abstracts or included the abstracts / headnotes from the sources (my favourite was ABI/Inform).  We learned to ‘field search’ to improve search retrieval recall and accuracy.  My experience with fulltext began with beta versions of Info Globe, the fulltext of The Globe & Mail newspaper, which pioneered fulltext commercial systems with QL Systems.

TCP/IP: The Internet Protocol Suite

“The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.” [Wikipedia]
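
As a minimal illustration of the suite in action, the Python sketch below opens a TCP connection over IP and sends a tiny HTTP request (example.com is a public demonstration host; network access is assumed):

    import socket

    # TCP (a reliable byte stream) carried over IP: the core of TCP/IP.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(sock.recv(200).decode("ascii", errors="replace"))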

Command Line Interface

Command Line searching preceded the search ‘box.’  “A command-line interpreter or command-line processor uses a command-line interface (CLI) to receive commands from a user in the form of lines of text. This provides a means of setting parameters for the environment, invoking executables and providing information to them as to what actions they are to perform. In some cases, the invocation is conditional based on conditions established by the user or previous executables. Such access was first provided by computer terminals starting in the mid-1960s. This provided an interactive environment not available with punched cards or other input methods.” [Wikipedia]

FTP File Transfer Protocol

FTP is the way we transfer a file from one computer or system to another, especially on the internet.  At one point in history, this was the main point of the Internet before there was a world-wide-web protocol.  “The File Transfer Protocol (FTP) is a standard communication protocol used for the transfer of computer files from a server to a client on a computer network. FTP is built on a client–server model architecture using separate control and data connections between the client and the server.  FTP users may authenticate themselves with a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. For secure transmission that protects the username and password, and encrypts the content, FTP is often secured with SSL/TLS (FTPS) or replaced with SSH File Transfer Protocol (SFTP).” [Wikipedia]
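
For a concrete taste, Python’s standard ftplib module speaks this protocol; in this minimal sketch the server name and file are hypothetical placeholders:

    from ftplib import FTP

    # Hypothetical anonymous FTP server and file name, for illustration only.
    with FTP("ftp.example.org") as ftp:
        ftp.login()        # anonymous, clear-text sign-in
        print(ftp.nlst())  # list the current directory
        with open("readme.txt", "wb") as f:
            ftp.retrbinary("RETR readme.txt", f.write)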

The Original Internet Search– Gopher, Archie, Jughead, and Veronica

“Archie, Gopher, Veronica and Jughead are four standard “finding” tools on the Internet. The Archie database is made up of the file directories from hundreds of systems. When you search this database based on a file’s name, Archie can tell you which directory paths on which systems hold a copy of the file you want.”  What is the difference between FTP and Gopher?  “Like FTP, Gopher and WWW allow the information provider to serve a file system directory tree.  Unlike FTP, which can only access information on one server at a time, a Gopher menu or WWW hypertext document can point to any file or directory located on any FTP or Gopher or WWW server.”

“Gopher: An easy-to-use file retrieval program, based on hierarchical, distributed menus. See also Veronica. FTP File Transfer Protocol, a protocol for copying files to and from remote machines. Archie. A database of locations for all files that are publicly available through FTP.”  A Gopher is a menu system that simplifies locating and using Internet resources. Each Gopher menu at each Gopher site is unique. Gopher menus usually include the other familiar features of the Internet.  You can use a Gopher to Telnet to a location or to FTP a file or to do just about anything else–if that option is listed on the Gopher menu.  Gopher software makes it possible for the system administrator at any Internet site to prepare a customized menu of files, features and Internet resources. When you use the Gopher, all you must do is select the item you want from the menu.

“Veronica was a search engine used to locate documents accessible over the Internet using the Gopher communication protocol. The Gopher protocol was an alternative to the World Wide Web that was popular in the 1990s, and Veronica was the primary search system for the Gopher protocol.”  The Veronica database is a collection of menus from most Gopher sites. When you do a Veronica search, you are searching menu items. During the search, Veronica builds an on-the-spot menu consisting of just those items that match your request. When the search is finished, Veronica will present you with a customized Gopher menu.  The Veronica database of all Gopher menu items is called Gopherspace. Thus, if you used Veronica to search Gopherspace for the word supreme, you would most likely come up with a Gopher-style menu listing the places to get U.S. Supreme Court decisions. At this point, you could simply choose an item, and Veronica would automatically take you there.

To use Archie, you must Telnet to an Archie server. You can do that by keying in a command such as telnet://archie.internic.net to get to the Archie server at that address and log on by keying in archie when prompted to do so. Once you do your Archie search, you must then go get the file using FTP, the Internet File Transfer Protocol.

Jughead is available at some Gopher sites and uses the menu items on a single Gopher menu as its database.”

GUI: Graphical User Interface

“The GUI (/ˌdʒiːjuːˈaɪ/ JEE-yoo-EYE or /ˈɡuːi/ GOO-ee), graphical user interface, is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based UIs, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of CLIs (command-line interfaces), which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones and smaller household, office, and industrial controls. The term GUI tends not to be applied to other lower-display resolution types of interfaces, such as video games (where HUD (head-up display) is preferred), or not including flat screens like volumetric displays because the term is restricted to the scope of 2D display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.” [Wikipedia]

Yahoo! Directory Search

“In January 1994, Jerry Yang and David Filo were electrical engineering graduate students at Stanford University, when they created a website named “Jerry and David’s guide to the World Wide Web”.  The site was a human-edited web directory, organized in a hierarchy, as opposed to a searchable index of pages. In March 1994, “Jerry and David’s Guide to the World Wide Web” was renamed “Yahoo!” and became known as the Yahoo Directory.  The “yahoo.com” domain was registered on January 18, 1995.   The word “yahoo” is a backronym for “Yet Another Hierarchically Organized Oracle” or “Yet Another Hierarchical Officious Oracle”.  The term “hierarchical” described how the Yahoo database was arranged in layers of subcategories.” [Wikipedia] “By 1998, Yahoo was the most popular starting point for web users, and the human-edited Yahoo Directory the most popular search engine, receiving 95 million page views per day, triple that of rival Excite.”  [Wikipedia]

Although Yahoo! evolved and developed or acquired several different applications and websites, it started as a human-edited directory search engine and not a fulltext or web search engine.

Algorithmic Search

Algorithmic search refers to the process of finding a solution or an answer to a problem using an algorithm. An algorithm is a set of instructions that are executed in a specific order to solve a particular problem.

There are different types of algorithmic searches, including:

  1. Brute-force search: In this method, the algorithm tries every possible solution until it finds the correct one. This approach can be time-consuming and may not always find the optimal solution.
  2. Heuristic search: This method uses problem-specific knowledge to guide the search process and quickly narrow down the possible solutions. This approach can be more efficient than brute-force search but may not always find the optimal solution.
  3. Genetic algorithms: This method uses a population of possible solutions that are evaluated and evolved over time to find the best solution. This approach is often used in optimization problems where there are many possible solutions.
  4. Depth-first search: In this method, the algorithm explores a single path as deeply as possible before backtracking and exploring other paths. This approach is often used in tree or graph traversal problems.
  5. Breadth-first search: This method explores all possible paths of the same depth before moving on to deeper paths. This approach is also often used in tree or graph traversal problems.

Overall, algorithmic search is a fundamental concept in computer science and is used in a wide range of applications, from search engines and recommendation systems to robotics and artificial intelligence. [ChatGPT]
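
To make the last two strategies concrete, here is a minimal Python sketch on an invented four-node graph; breadth-first visits each depth level in turn, while depth-first follows one path to the end before backtracking:

    from collections import deque

    # A small directed graph as an adjacency list (invented data)
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

    def bfs(start):
        # Breadth-first: visit all nodes at one depth before going deeper
        seen, order, queue = {start}, [], deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return order

    def dfs(start, seen=None):
        # Depth-first: follow one path as deeply as possible, then backtrack
        if seen is None:
            seen = set()
        seen.add(start)
        order = [start]
        for nxt in graph[start]:
            if nxt not in seen:
                order += dfs(nxt, seen)
        return order

    print(bfs("A"))  # ['A', 'B', 'C', 'D']
    print(dfs("A"))  # ['A', 'B', 'D', 'C']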

Google

“Google began in January 1996 as a research project by Larry Page and Sergey Brin when they were both PhD students at Stanford University in California.” “While conventional search engines ranked results by counting how many times the search terms appeared on the page, they theorized about a better system that analyzed the relationships among websites. They called this algorithm PageRank; it determined a website’s relevance by the number of pages, and the importance of those pages that linked back to the original site. Page told his ideas to Hassan, who began writing the code to implement Page’s ideas.” [Wikipedia]
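
A minimal sketch of the PageRank idea, on an invented four-page web with the conventional 0.85 damping factor; this illustrates the published algorithm, not Google’s production system:

    # Each page's rank flows to the pages it links to; iterate to converge.
    links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    pages = list(links)
    damping = 0.85
    rank = {p: 1.0 / len(pages) for p in pages}

    for _ in range(50):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new

    # Pages with many or important in-links rank highest
    print({p: round(rank[p], 3) for p in sorted(pages)})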

Federated Search / Metasearch

Federated search enables a user to search several different data sources at once by making a single query.  The federator gathers results from one or more search engines and then presents all the results in a single user interface.
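
As a minimal sketch (the three source functions below are invented stand-ins for real connectors to a catalogue, a licensed database, and a web engine), a federator can fan a query out in parallel and merge what comes back:

    import concurrent.futures

    # Invented stub connectors; real ones would call catalogue, database,
    # or web-search APIs and normalize the records they return.
    def search_catalog(q):
        return ["catalog: " + q + " (record 1)"]

    def search_database(q):
        return ["database: " + q + " (article 7)"]

    def search_web(q):
        return ["web: " + q + " (page 3)"]

    def federated_search(query):
        sources = [search_catalog, search_database, search_web]
        with concurrent.futures.ThreadPoolExecutor() as pool:
            result_lists = pool.map(lambda source: source(query), sources)
        # Merge into one result list; a real federator would also
        # de-duplicate and re-rank across sources.
        return [hit for hits in result_lists for hit in hits]

    print(federated_search("groupthink"))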

ChatGPT and its clones

I asked ChatGPT to compare ChatGPT, Wolfram Alpha and Google Search.  Its (Their?) answer:

“ChatGPT, Wolfram Alpha, and Google Search are all different types of artificial intelligence-based tools that serve different purposes. Here are some of the key differences and similarities between the three:

  1. Purpose and Functionality:
  • ChatGPT: ChatGPT is a language model that can converse with users in natural language, answer questions, and perform tasks such as generating text, translations, and summarizations.
  • Wolfram Alpha: Wolfram Alpha is a computational knowledge engine that computes answers to factual questions by analyzing data from a wide range of sources, such as encyclopedias, databases, and scientific journals. It can also perform complex mathematical calculations, plot graphs, and generate reports.
  • Google Search: Google Search is a search engine that indexes and ranks web pages based on their relevance to a user’s search query. It provides a list of search results that may contain answers to the user’s questions.
  2. Data sources:
  • ChatGPT: ChatGPT is trained on a large corpus of text data from the internet and other sources.
  • Wolfram Alpha: Wolfram Alpha sources data from curated databases, such as the Wolfram Knowledgebase, and from external sources, such as government databases, scientific publications, and social media.
  • Google Search: Google Search crawls the web and indexes billions of web pages.
  3. User Interface:
  • ChatGPT: ChatGPT has a conversational interface that enables users to interact with it in natural language.
  • Wolfram Alpha: Wolfram Alpha has a web interface that allows users to input queries and receive results in various formats, such as tables, graphs, and images.
  • Google Search: Google Search has a web interface that displays search results in a list format, with links to relevant web pages.
  4. Accuracy:
  • ChatGPT: ChatGPT’s accuracy depends on the quality of its training data and its ability to understand and interpret user input. It can sometimes generate incorrect or irrelevant responses.
  • Wolfram Alpha: Wolfram Alpha’s accuracy depends on the quality and reliability of its data sources. It is designed to provide precise and accurate answers to factual questions.
  • Google Search: Google Search’s accuracy depends on the relevance and authority of the indexed web pages. It can sometimes return irrelevant or misleading results.

In summary, ChatGPT, Wolfram Alpha, and Google Search are different types of AI-based tools that serve different purposes and have different strengths and weaknesses. ChatGPT is best for conversational tasks, while Wolfram Alpha is best for factual calculations and queries, and Google Search is best for finding web pages that contain relevant information.”

API: Application Programming Interface

API stands for Application Programming Interface.  In the context of APIs, the word Application refers to any software with a distinct function, and the Interface can be thought of as a contract of service between two applications: a set of functions and procedures that allows applications to access the features or data of an operating system, application, or other service.
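
For illustration, here is a minimal sketch of consuming a web API from Python; the URL and the shape of the JSON response are hypothetical, so substitute a real service’s documented endpoint:

    import json
    import urllib.request

    # Hypothetical endpoint and response shape, for illustration only;
    # substitute a real service's documented URL and fields.
    url = "https://api.example.com/v1/books?author=Abram"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)  # the 'contract': JSON in an agreed shape
    print(data.get("results", []))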

Transformer

“ChatGPT is a large language model developed by OpenAI, based on the GPT (Generative Pre-trained Transformer) architecture. Like other GPT models, ChatGPT is a neural network that has been pre-trained on a massive amount of text data to generate human-like responses to natural language queries.

The architecture of ChatGPT is based on the transformer model, which was introduced in a 2017 paper by Vaswani et al. The transformer architecture is known for its ability to process sequential data, such as text, and model long-range dependencies between words. This makes it well-suited for language modeling tasks, such as generating coherent and contextually appropriate responses to natural language queries.

ChatGPT is trained on a diverse range of text sources, including books, articles, and web pages, to ensure that it has a broad knowledge base. It has a large number of parameters, which allows it to model complex relationships between words and generate high-quality responses.

ChatGPT has been used in a variety of applications, including chatbots, customer service agents, and language translation systems. It has demonstrated state-of-the-art performance in a range of natural language processing tasks, and its capabilities continue to evolve as it is trained on ever larger and more diverse datasets.” (ChatGPT)
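
To make “modelling long-range dependencies” slightly less abstract, here is a minimal sketch of the scaled dot-product attention at the heart of the transformer, using tiny invented vectors in place of learned embeddings:

    import math

    def softmax(xs):
        exps = [math.exp(x) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    # One query attending over three key/value pairs (2-d toy vectors)
    query = [1.0, 0.0]
    keys = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
    values = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

    # Weight each value by how well its key matches the query ...
    scale = math.sqrt(len(query))
    weights = softmax([dot(query, k) / scale for k in keys])
    # ... then blend the values into one context-aware output vector
    output = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(2)]
    print([round(w, 3) for w in weights], [round(x, 3) for x in output])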

Large Language Models (LLM)

“A large language model is a type of machine learning model that is capable of processing and generating human-like language. These models are usually based on neural networks and are trained on large datasets of text, such as books, articles, and web pages.

Large language models use techniques such as unsupervised learning to learn patterns in language data and create internal representations of words and sentences. These models can then be used for a variety of natural language processing tasks, such as language translation, text summarization, and question answering.

One of the most well-known examples of a large language model is GPT (Generative Pre-trained Transformer), which was introduced by OpenAI in 2018. GPT uses a transformer architecture and is pre-trained on large datasets of text to generate human-like responses to natural language queries.

Large language models are becoming increasingly important in the field of natural language processing, as they enable machines to understand and generate human-like language in a way that was previously impossible. They have a wide range of applications, from chatbots and virtual assistants to language translation and content generation.” (ChatGPT)

Bots and Chatbots

A bot, short for robot, is a software program that is designed to automate certain tasks on a computer or on the internet. Bots can be programmed to perform a variety of functions, ranging from simple tasks like searching for information on the web to complex ones like playing games or carrying out financial transactions.

There are many types of bots, including chatbots, which are designed to simulate human conversation and assist users in performing tasks, and web crawlers, which are used to collect data from the internet. Bots can be programmed to operate autonomously or to interact with users and respond to their input.

Some bots are designed to be helpful and perform useful functions, while others are malicious and can be used to carry out harmful actions, such as spreading malware, stealing personal information, or perpetrating fraud. Therefore, it is important to be aware of the potential risks associated with bots and to use them judiciously.

Overall, bots can be very useful in automating repetitive or time-consuming tasks, but it is important to use them responsibly and to be aware of their potential impact on security and privacy.  The main difference between a bot and a chatbot is that chatbots are programmed to communicate with users in natural language, using text or speech, while other types of bots may not interact with users directly.

Chatbots are typically used to provide customer service, answer questions, or help users complete tasks, such as booking a reservation or making a purchase. They are often integrated into messaging platforms, such as Facebook Messenger or WhatsApp, or deployed on websites as a chat widget.

While chatbots can be programmed to be very sophisticated and provide personalized assistance to users, they are still limited by their programming and may not be able to understand complex queries or respond in a way that fully replicates human conversation. However, advances in natural language processing and machine learning are making chatbots increasingly capable of understanding and responding to a wide range of user queries and needs. (ChatGPT)
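
A minimal sketch of the simplest kind of rule-based chatbot described above; the keyword rules are invented, and production systems rely on natural language processing rather than bare keyword matching:

    # Invented keyword -> canned-answer rules for a toy library chatbot
    rules = {
        "hours": "We are open 9-5, Monday to Friday.",
        "renew": "You can renew items online or at any branch.",
        "reserve": "Reservations can be made through the website.",
    }

    def reply(message):
        for keyword, answer in rules.items():
            if keyword in message.lower():
                return answer
        return "Sorry, I did not understand. A librarian will follow up."

    print(reply("What are your HOURS?"))
    print(reply("Can I renew my books?"))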

Devices, Speed, Storage, and Networks

The history of searching was limited in its early days by a few hardware restrictions that had to evolve or be overcome over time.

Computers: Many of us lived through the transition from mainframes to minicomputers to personal computers to handheld devices.  Combined with the slow and then speedier online access transitions, we also worked through (and have drawers of old tech to prove it) dedicated vendor-specific terminals and luggable so-called dumb terminals (as differentiated from PCs).  In the early 1980s the personal computer came on the scene with two-tone screens (shades of gray, or amber or green text) integrated with a processing chassis.  Ultimately, we got the PC with the monitor, printers, and scanner as peripherals.  Colour monitors eventually arrived with the GUI.  Communication advances ran in parallel: phones were merely phones to start, but the innovation was that they were mobile and small enough to fit in a pocket.  The laptop and then the mobile device went through a period of enormous innovation such that they became much more than just a computer or telephone, affecting every kind of communication, transaction, information format, and entertainment.

Speed: Old timers, such as I, like to regale others (and commiserate with our peers) with memories of device limitations.  One was speed, as we watched painfully slow results appear on screen (or worse – delivered days later).  We may have started at speeds of 110 baud, quickly transitioning to 300 baud modems and then 1200 and 2400 baud modems, and then on to the wonders of many iterations of broadband.  We transitioned from dial-up through the phone to wired access and then Wi-Fi.  We worked through standard telephone and coaxial cable wires to glass fibre and satellite.

Storage: The transition to the Cloud went through many evolutionary steps.  We stored ‘stuff’ on everything – paper, reel-to-reel tape, floppy disks, microforms, computer tapes, mag tapes, video disks, audio cassettes, CD-ROM, DVD, thumb-drives, and more – and then migrated it to new formats.  All of these formats eventually reached their capacity limits, and the transition to shared space in the cloud arrived.  At first, digital conversions were just black and white and constrained by old ASCII formats that limited the ability to convert and use mathematical formulae and non-Latin scripts.  Digital sound was a dream, bandwidth severely limited video downloads, and colour sometimes remained in the real world.  All these issues were addressed within the time of a single generation, but many older conversions remain artifacts of their age.

Networks: Finally, we worked through adopting networked computing, initially at the single-workplace level (office automation), then wide-area networks for multiple offices in the enterprise, then enterprise dial-up from home or a worksite.  What quickly followed was an explosion as networks became Internet nodes and the web enabled access to a growing world of too much information.  This innovation produced the network effect so well described by Robert Metcalfe and, arguably, changed human behaviour and society on a mass scale – some called it a revolution.  It fostered an information-highway metaphor, which was discarded as we experienced a three-dimensional information ocean.

Post Coordinate versus Pre-coordinate Indexing

Pre-coordinate indexing systems are conventional systems mostly found in printed indexes.  In this type of system, a document is represented in the index by a heading or headings comprising a chain or string of terms.  These terms, taken together, are expected to define the subject content of the document.

Post Coordinate (also called post-combination) indexing, n. A method of indexing materials that creates separate entries for each concept in an item, allowing the item to be retrieved using any combination of those concepts in any order.

What is the difference between pre and post coordinate indexing?

Pre-coordination is the combining of elements into one heading in anticipation of a search on that compound heading.  Post-coordination is the combining of headings or keywords by a searcher at the time he or she looks for materials in a catalog.  (Relatedly, derived indexing uses the same language as that used by the author – also known as natural language indexing, where the words and terms used by the author in the text provide access for users – while assigned indexing is based on a conceptual analysis of terms and words.)
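
A minimal Python sketch of post-coordination, with invented item numbers and concept terms: each concept is indexed separately, and the searcher intersects any combination of concepts, in any order, at query time:

    # Each concept indexed on its own: concept -> set of item numbers (invented)
    index = {
        "libraries": {101, 102, 105},
        "history": {102, 103, 105},
        "canada": {105, 106},
    }

    # Post-coordination: combine concepts at search time, in any order
    print(index["libraries"] & index["history"])                    # {102, 105}
    print(index["canada"] & index["history"] & index["libraries"])  # {105}

    # Pre-coordination would instead store a single compound heading,
    # such as "Libraries -- History -- Canada", chosen by the indexer.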

Artificial Intelligence or AI (Sometimes called Artificial Narrow Intelligence)

Artificial Intelligence (AI) is a field of computer science and engineering that aims to create intelligent machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language processing. AI involves the development of algorithms and models that enable machines to learn from data, recognize patterns, and make decisions or predictions based on that data.

There are several approaches to AI, including rule-based systems, symbolic AI, neural networks, and deep learning. Rule-based systems use if-then rules to make decisions, while symbolic AI represents knowledge as symbols and manipulates them using logic. Neural networks are inspired by the structure of the human brain and use interconnected layers of nodes to learn from data. Deep learning is a type of neural network that uses multiple layers of nodes to learn and represent complex patterns in data.

AI has numerous applications, including image and speech recognition, natural language processing, robotics, autonomous vehicles, healthcare, finance, and many others. AI has the potential to revolutionize the way we live and work, but it also raises ethical and societal concerns, such as privacy, bias, and job displacement. (ChatGPT)

Machine Learning

Machine learning is a branch of artificial intelligence that enables machines to learn from data and improve their performance on a specific task without being explicitly programmed. It involves the use of statistical techniques and algorithms to analyze and learn from patterns in large amounts of data, in order to make predictions or decisions about new, unseen data.

Machine learning algorithms can be supervised, unsupervised, or semi-supervised. In supervised learning, the algorithm is trained on labeled data, meaning the input data is accompanied by an output variable. The algorithm learns to predict the output variable based on the input data. In unsupervised learning, the algorithm is trained on unlabeled data, meaning there is no output variable. The algorithm learns to find patterns and structure in the data on its own. In semi-supervised learning, the algorithm is trained on a combination of labeled and unlabeled data.

Machine learning has many applications, including natural language processing, image recognition, fraud detection, personalized marketing, and recommendation systems. (ChatGPT)
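
As a toy illustration of the supervised case, here is a one-nearest-neighbour classifier in Python: it ‘learns’ from labelled examples and predicts a label for new, unseen points (the data is invented):

    # Labelled training data: (hours studied, pages read) -> outcome (invented)
    training = [((1.0, 2.0), "fail"), ((2.0, 1.0), "fail"),
                ((8.0, 9.0), "pass"), ((9.0, 7.0), "pass")]

    def predict(point):
        # Label of the closest training example (squared Euclidean distance)
        def distance(example):
            (x, y), _ = example
            return (x - point[0]) ** 2 + (y - point[1]) ** 2
        _, label = min(training, key=distance)
        return label

    print(predict((7.5, 8.0)))  # pass
    print(predict((1.5, 1.5)))  # fail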

Artificial General Intelligence or AGI

Artificial General Intelligence (AGI) refers to the hypothetical ability of an artificial intelligence system to understand or learn any intellectual task that a human can. It is sometimes also referred to as strong AI, full AI, or human-level AI.

AGI systems are capable of thinking and reasoning abstractly, acquiring knowledge and skills through experience, adapting to new situations, and solving a wide range of problems in various domains. They can also understand natural language, learn from experience, and transfer knowledge from one domain to another.

While most existing AI systems are designed to perform specific tasks or solve particular problems, AGI systems would be able to perform a wide range of tasks, learn new ones, and apply knowledge across multiple domains. Achieving AGI is a major goal of artificial intelligence research, but it remains a challenging and elusive goal, with many technical and ethical hurdles to overcome.

Some experts believe that achieving AGI could lead to a technological singularity, a hypothetical point in time when AI surpasses human intelligence, leading to exponential advances in technology and potentially transforming human society in unpredictable ways. (ChatGPT)

Generative

Generative refers to a type of model or algorithm that can create new data that is similar to, but not an exact copy of, existing data. Generative models are designed to learn the statistical patterns and structures underlying a dataset and use that knowledge to generate new data points that share similar characteristics with the original data.

Generative models can be used for various tasks, such as image or speech synthesis, text generation, or data augmentation. For example, in image synthesis, a generative model can be trained on a large dataset of images and then used to generate new images that share similar features and characteristics as the original images.

The key difference between generative models and discriminative models is that discriminative models are designed to classify or distinguish between different categories of data, while generative models are designed to learn and model the underlying distribution of the data and generate new samples from that distribution. (ChatGPT)

GPT: Generative Pre-Trained Transformer

A Generative Pre-trained Transformer (GPT) is a type of language model that uses deep learning techniques to generate text. GPTs are based on a type of neural network called a transformer, which is designed to process sequential data, such as text.

GPT models are pre-trained on large amounts of text data using an unsupervised learning approach. During pre-training, the model learns to predict the next word in a sentence based on the previous words. This process enables the model to capture the statistical patterns and relationships between words in the text corpus.

Once the GPT model is pre-trained, it can be fine-tuned on a smaller amount of labeled data for a specific language task, such as language translation, text summarization, or question answering. During fine-tuning, the model learns to adapt its pre-trained knowledge to the specific task and produces more accurate and relevant outputs.

GPT models have been used in various applications, such as natural language generation, chatbots, language translation, and text summarization. The most recent version, GPT-3, is one of the largest and most advanced language models to date, with over 175 billion parameters, and has demonstrated remarkable capabilities in generating human-like text and performing a wide range of language tasks. (ChatGPT)
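
The pre-training objective, predicting the next word from the words before it, can be illustrated with a toy bigram counter over an invented corpus; real GPTs use deep neural networks over vast corpora, not simple counts:

    from collections import Counter, defaultdict

    # Count which word follows which in a tiny invented corpus
    corpus = "the cat sat on the mat and the cat slept".split()
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(word):
        # The most frequent continuation seen in training
        return bigrams[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat' (follows 'the' twice, 'mat' once)
    print(predict_next("on"))   # 'the'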

Multimodal AI

Multimodal AI refers to the field of artificial intelligence that involves processing and analyzing multiple modes or modalities of data, such as text, speech, images, videos, and sensor data, to gain a more comprehensive understanding of the world.

Multimodal AI systems are designed to integrate information from multiple sources to enhance their performance in various tasks, such as object recognition, speech recognition, natural language understanding, and human-computer interaction. By combining different modalities, multimodal AI systems can compensate for the limitations and ambiguities of individual modalities and provide more robust and accurate results.

Multimodal AI involves the development of algorithms and models that can process and fuse information from multiple modalities, such as using computer vision to recognize objects in images or videos and natural language processing to understand text or speech descriptions of those objects.

Multimodal AI has numerous applications, including autonomous vehicles, healthcare, education, entertainment, and robotics. For example, in healthcare, multimodal AI systems can integrate data from various medical sensors and devices to provide more accurate diagnosis and treatment recommendations. In autonomous vehicles, multimodal AI can combine visual and audio data to enable safer and more reliable navigation. (ChatGPT)

Turing Test

The Turing Test is a test of a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human. It was proposed by British mathematician and computer scientist Alan Turing in 1950 as a way to measure the intelligence of a machine.

The test involves a human evaluator who engages in natural language conversations with two entities, one being a human and the other being a machine. The evaluator is not aware of which entity is human and which is the machine. If the evaluator is unable to distinguish between the two entities and perceives the machine’s responses as if they were from a human, then the machine is said to have passed the Turing Test.

The Turing Test has been the subject of debate and criticism since its proposal. Some argue that the test is not a reliable measure of machine intelligence because it only tests the machine’s ability to simulate human-like behavior in a specific context. Others argue that passing the Turing Test does not necessarily mean that the machine has true intelligence, as the test does not evaluate the machine’s ability to think or reason like a human.

Despite its limitations, the Turing Test has played a significant role in the development of artificial intelligence, inspiring researchers to develop intelligent machines capable of natural language processing and other cognitive tasks. (ChatGPT)

DISCLOSURE: Many of the above definitions were created by ChatGPT with minor editing.



One Response


  1. This was a walk down memory lane! Thank you. Glad I can generally leave AI to others, including one of my sons who is a SEO and doing deep dives into Chat GPT and others. I retired before AI hit the scene but it is, I think, vital to keep learning. Well done Stephen.