
Separating the Wheat From the Chaff: Identifying the Benefits of Artificial Intelligence

By Tyson L. Swetnam, Ph.D., Research Assistant Professor, BIO5 Institute, The University of Arizona
June 27, 2023
New cell phone apps specifically designed for farming using LLMs and computer vision platforms will be coming online soon. Photo courtesy of Adobe Stock/tuelekza.

In the last four months, a new technology has grabbed online headlines: OpenAI’s artificial intelligence (AI) ChatGPT. ChatGPT is part of a growing family of AI models known as “large language models” or LLMs. Major technology companies are now racing to catch OpenAI or to design “extensions” and “plugins” that use its API (application programming interface).

These LLMs require many thousands of hours of “training” on the largest computer systems ever built, using as much human knowledge as possible. Essentially, the model is fed the bulk of information that has accumulated on the internet, e.g., Wikipedia, scientific journal publications, and digital libraries of magazines, books, movies, music, and art. Once trained and tuned, LLMs can react and respond in real‐time to humans who engage in a conversation with them. However, these training and tuning procedures do not make LLMs “explainable.” No one can say how, why, or what resources caused the LLM to generate a specific response.

Why Should You Care?

Whether generative AI, like ChatGPT, will be as consequential in our lives as personal computers (PCs) were in the 1980s or the release of Apple’s iPhone in 2007 is yet to be determined. It is more likely that LLMs will improve our lives in the same way that productivity software like Microsoft Word and Excel has, or as Google’s search engine did in the 1990s. I believe LLMs are a massive leap forward.

Perhaps the main reason you may want to use an AI like ChatGPT is that it can save your most valuable asset: time. Early reports suggest that LLMs like ChatGPT can produce a >50% improvement in overall productivity (Noy & Zhang, 2023). In other words, an AI assistant can help you get more work done in (far) less time.

You should keep in mind that these new AIs are best used as tools and assistants to help facilitate the things that you already do rather than thinking or making decisions for you. The technology behind LLMs limits their capacity to “reason” or actually “think”—they are artificial narrow intelligence and not artificial general intelligence or super intelligence like you have seen on television or in movies or read about in science fiction.

Three AI Tools to Know About

Amid the gold rush of investment pouring into new LLM technology (early investors in Google, Apple, and Facebook got rich, and capital expects a similar outcome from AI), there are three AI platforms you should know about right now:

1. OpenAI ChatGPT

OpenAI has received some of the largest investments in AI over the last five years. It was OpenAI’s flagship ChatGPT that captured the world’s attention with its unprecedented capabilities and multitude of functionalities. ChatGPT is just five months old, and it amassed more than 100 million users within two months of its public release, the fastest user‐base growth of any internet platform in history.

The latest model, GPT‐4, which powers ChatGPT, can provide responses to contextual questions, help write and debug computer code, or write creative essays, sonnets, or poetry. It has performed at or near the top on a wide range of standardized tests (OpenAI, 2023). This is especially impressive given its lack of knowledge of “facts” in a traditional sense; its knowledge is based on statistical probabilities of relationships between words and responses.

ChatGPT uses “prompts” to create its responses. By customizing your own prompts, you can narrow down the types of responses that ChatGPT provides. This is a critical step because ChatGPT initially draws from its entire network of trained information. Once you tell it to limit its areas of expertise and to respond in a specific way, it can provide contextual answers with more specificity and relevance.

Researchers have already started to create curated lists of example prompt styles (see https://github.com/f/awesome-chatgpt-prompts).

To get started using ChatGPT for free, go to https://chat.openai.com/ and create an account. Paying $20 per month gives additional features and benefits but is not necessary to play around to discover whether it adds value to your work.

There is also a professional tier of ChatGPT that uses OpenAI’s API (https://platform.openai.com/). You are charged by “token,” and you can explore a wider array of models built on LLMs (trained on speech, imagery, computer code, or text) or leverage many of the external platforms being developed around OpenAI (see HuggingFace below). For novice users of ChatGPT, there is currently no advantage to the professional service.
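For readers curious about how the API and prompt customization fit together, here is a minimal sketch using OpenAI’s Python package (the pre‐1.0 interface available at the time of writing). The model name, prompt wording, and example question are purely illustrative assumptions, and you would substitute your own API key.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key from platform.openai.com

# A custom "system" prompt narrows the model's area of expertise so that
# responses stay focused on agronomy instead of general knowledge.
response = openai.ChatCompletion.create(
    model="gpt-4",  # any chat-capable model available to your account
    messages=[
        {"role": "system",
         "content": ("You are an agronomy assistant. Answer only questions about "
                     "field crops, soils, and pest management, and say so when a "
                     "question is outside that scope.")},
        {"role": "user",
         "content": "When should I scout for western corn rootworm?"},
    ],
)

# Print the assistant's reply; usage fields on the response show the tokens billed.
print(response["choices"][0]["message"]["content"])
```

The same idea applies in the free web interface: stating the role and scope you want at the start of a conversation is what steers the responses.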

Illustration courtesy of Graf Vishenka. 

2. Microsoft Bing and Google BARD

As with most novel technology, the innovation is first seen in expert use of stand‐alone products (e.g., ChatGPT) that must be sought out or are otherwise inaccessible to the masses. Then a popular mainstream version is released that lowers the barrier to entry but also reduces the features. With an early investment in OpenAI, Microsoft has already integrated GPT‐4 into its new Bing search engine. When used within Microsoft’s Edge browser, Bing chat can respond interactively to your prompts. Other integrations with Microsoft’s Office 365 productivity software mean that you may already be using its AI Copilot technology without knowing it.

Google was caught somewhat off guard by the release of ChatGPT. It has since released its own competitor, BARD, which has much of the same functionality as Bing. While BARD initially stumbled when it was announced, the fact that Google handles more than 90% of the world’s search requests and has control over such a large portion of information on the internet requires attention. Like Microsoft, Google is planning on integrating BARD with its own productivity software (Docs, Sheets, Drive, etc.).

Microsoft has already integrated GPT‐4 into its new Bing search engine, and Google’s competitor, BARD, has much of the same functionality as Bing. Photo courtesy of Adobe Stock/Rokas. 

3. HuggingFace

HuggingFace, with its yellow smiley emoticon logo, is an American company positioning itself as the place to find and host GPT models, not only those by OpenAI. Thousands of start‐up companies and research software engineers use HuggingFace to feature their custom AI models, many of them offered for free or built by leveraging OpenAI’s API.

As of today, there are more than 180,000 trained models and 30,000 datasets for computer vision, audio, natural language processing, and multimodal tasks (e.g., feature extraction and text‐to‐image generation) available on HuggingFace.

One model of particular note is “Paper‐QA,” which lets you upload multiple PDF documents or other files and query them for answers. It could even potentially help you write a scientific review article.

Most models, such as Paper‐QA, can be used interactively on the HuggingFace website, and many of them can be downloaded and run locally on your own hardware. Importantly, most of these are open source and freely available for reuse and modification. However, the business model is to charge you for the data processing performed (“compute”). For a novice user, this often amounts to something like $0.02 per use, but it is something to be aware of and track.
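To make “run locally on your own hardware” concrete, here is a minimal Python sketch using the open‐source transformers library from HuggingFace. The specific model name and the crop‐scouting text are assumptions chosen only for illustration; any similar freely available model on the Hub works the same way.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Download a freely available zero-shot classification model from the Hub
# the first time this runs; afterward it is cached and runs locally.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Ask the model to rank which of several field problems best matches a note.
result = classifier(
    "Yellowing leaves with stunted growth appeared after heavy spring rains.",
    candidate_labels=["nutrient deficiency", "fungal disease", "herbicide injury"],
)

# Prints the candidate labels ranked by how well they match the text.
print(result["labels"])
print(result["scores"])
```

Running a model this way on your own computer avoids per‐use charges, at the cost of your own hardware doing the work.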

A New Farmer’s Almanac in the Palm of Your Hand

In the coming months to years, new cell phone apps specifically designed for farming using LLMs and computer vision platforms like those in development on HuggingFace, Google, Meta, and OpenAI will be available.

These apps will use LLMs trained on the entirety of our knowledge. They will be fine‐tuned for your geographic region and could provide real‐time information about everything from feed or seed prices, to weather forecasts, to cell phone camera‐powered image recognition that can immediately identify weeds, pests, or diseases or measure crop health. These same apps could then send that information to farmers in a co‐op, or ask your local supply store to prepare a purchase of insecticide or herbicide, all while you’re still in your tractor or driving to a field site.
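As a rough illustration, the sketch below shows what the image‐recognition step of such an app could look like behind the scenes today. Everything here is a placeholder assumption: the general‐purpose vision model is not a purpose‐built agricultural model, and the file name is hypothetical.

```python
# Requires: pip install transformers torch pillow
from transformers import pipeline

# Load an off-the-shelf image classification model from HuggingFace.
identifier = pipeline("image-classification",
                      model="google/vit-base-patch16-224")

# A photo taken on a phone from the tractor cab or during a scouting walk.
predictions = identifier("leaf_photo.jpg")

# Print the top predicted labels with their confidence scores.
for p in predictions:
    print(f"{p['label']}: {p['score']:.2f}")
```

A farming app would swap in a model fine‐tuned on regional weed, pest, and disease imagery and wrap this step in a phone interface, but the underlying pieces already exist.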

The potential for innovation using these technologies is vast, and their ability to help improve farming and agriculture is far reaching. Ethical issues and limitations exist, and as with all technologies, the sum of benefits, drawbacks, and unintended consequences should be considered. However, playing around with these tools now can help you get a glimpse of, and secure, your future.

References 

Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. SSRN. http://dx.doi.org/10.2139/ssrn.4375283 

OpenAI. (2023). GPT‐4 technical report. arXiv, 2303.08774 [cs.CL]. https://doi.org/10.48550/arXiv.2303.08774

Members Forum is a place to share opinions and perspectives on any issue relevant to our members. The views and opinions expressed in this column are not necessarily those of the publisher. Do you have a perspective on a particular issue that you’d like to share with fellow members? Submit it to our Members Forum section at news@sciencesocieties.org. Submissions should be 800 words or less and may be subject to review by our editors‐in‐chief.


Text © The authors. CC BY-NC-ND 4.0. Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited.