Indirect Risks, Unproven Mainstream Appeal


Over the past few months, I’ve been reading with increasing fascination about the tech segment’s obsession with high-profile apps like ChatGPT, the poster child of the generative AI apps that have burst onto the scene.

In my more than 25 years in technology, I’ve never seen a new capability attract attention the way Gen AI has. More intriguing than the obsession itself is the fact that many high-profile companies have been caught off guard by the media and customer interest and still can’t clearly articulate how they’ll participate in the industry’s mad gold rush.

Equally intriguing is how the industry is waiting with bated breath for Apple’s presumed unveiling of its AR/VR products at its WWDC conference in June, or later in the year. While most companies with existing AR/VR products (Meta comes to mind) would generally dread a big competitor like Apple entering the category, given its industry clout and brand appeal, this may not be the case.

Let me explain.

Tepid Appeal of Current MR, VR Headsets

Noted analyst Ming-Chi Kuo thinks that investors have, up to now, overstated consumers’ genuine interest and demand for mixed reality and virtual reality headsets. Apple’s entry into the AR and VR space might change that.

Recently, Kuo wrote that consumers might not be quite ready to adopt AR and VR just yet as there isn’t enough compelling proof that augmented-reality headsets will become the newest craze in consumer electronics.

In his opinion, the mixed-reality headset from Apple is “perhaps the final opportunity for convincing investors that the AR/MR headset device might have a shot to be the next star product in consumer electronics.”

Kuo doesn’t make this assertion without evidence, noting that there has been a decline in the market-wide manufacturing and sales of virtual reality headsets.

A telling example: Sony has cut its PS VR2 headset production forecast by 20% for 2023. Moreover, Meta’s Quest Pro has delivered only 300,000 units. Pico, the biggest AR/VR headset manufacturer in China, fell more than 40% short of its 2022 shipping targets. These figures hardly characterize AR/VR headsets as mainstream.

All Eyes on WWDC 2023

Against this not-very-exciting market backdrop, Apple is rumored to unveil its long-anticipated mixed-reality headset. Kuo has publicly stated that he thinks the gadget will debut in the third quarter of this year, although many others believe it will debut at WWDC 2023.

Tim Cook has repeatedly expressed his support for an Apple augmented reality headset. However, other Apple engineers reportedly worry that the company’s entry into virtual and augmented reality might be a costly failure as it may not be ready for prime time from a relevant usage model standpoint.


In my view, what people really need is a good reason to get one rather than a fancy new Apple gadget. After all, many industry experts believe that Apple will announce these new headsets at decidedly “non-mainstream” price points, in the $3,000-or-above range. With a price point like that and a recession on the horizon, even Apple could face major headwinds.

VR gaming is exciting for some die-hard gamers, but casual games have a considerably larger market share and don’t require headsets. Businesses can absorb higher price points, as AR/VR headsets have compelling usage models in operations, warehousing, and medicine, but the volumes are not huge.

Movies are interesting, but how many people like to interact while watching television rather than being walled off in their little private theater? I apologize for my yawn.

This last point leads me back to Apple.

Immersive FaceTime Experience

I predict Apple has been waiting to develop a mainstream usage model that appeals to a broad audience, regardless of price points. I believe it will be some type of AR/VR implementation of FaceTime.

FaceTime revolutionized peer-to-peer video calling, taking it from the realm of something only IT or tech enthusiasts would engage in to something so casual that a grandmother now doesn’t think twice about it.

FaceTime on macOS (Image Credit: Apple)


Yes, the price points for these new Apple headsets will be high; premium hardware is crucial to avoid an amateur-hour experience. But Apple will point to the future, and those price points will come down quickly as the market ramps.

If Apple can create an immersive FaceTime experience that lets a user with an Apple headset perceive they are in the same location as another user or users, it will be a game-changer like none other. So, in that respect, the AR/VR space needs Apple to succeed. As the saying goes, a rising tide lifts all boats, and the industry knows that.

Generative AI Is Today’s ‘Gold Rush’

To put it mildly, investors, the tech industry, and the general public have embraced generative AI in ways I’ve never seen. Yet, I believe they are ignoring a crucial risk.

The tech world went bonkers when ChatGPT launched last November and allowed users to ask questions of a chatbot and receive replies generated by AI.

According to many thought leaders, the new technology has the potential to change industries, including media and health care (it recently passed all three parts of the U.S. Medical Licensing Examination). Even HAL from “2001: A Space Odyssey” would be impressed.

To implement the technology rapidly worldwide, Microsoft has already committed billions of dollars to its relationship with the technology’s originator, OpenAI, and has begun integrating the capability into its Bing search engine.

Undoubtedly, executives hope this will enable Microsoft to catch up to market leader Google in search, where it has long lagged. Ironically, Google has had its own series of generative AI setbacks with a less-than-stellar rollout of its Bard capability.

ChatGPT has been the most prominent example of what generative AI is capable of, though it’s not the only one. Given a training dataset, a generative AI model can produce new data based on it, such as images, sounds, or, in the case of a chatbot, text.
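As a minimal, hypothetical illustration of that idea (a toy sketch, not how ChatGPT or any production model actually works), a simple bigram model can “learn” which word tends to follow which in a training text, then generate new text by sampling from what it learned:

```python
import random

def train_bigram_model(text):
    """Build a toy generative model: map each word to the words
    that were observed to follow it in the training text."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Sample new text one word at a time from the learned model."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # dead end: this word never had a successor in training
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Hypothetical training corpus.
corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The output is new in the sense that the exact sequence may never appear in the corpus, yet every transition it makes was learned from the training data, which is the essence of the generative approach.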

Significant value can be realized because generative AI models produce results much more quickly than people can. Consider, for example, a game setting where artificial intelligence (AI) creates complex new landscapes and characters without human oversight.

Black Box AI

Yet not every circumstance or sector is a good fit for generative AI. It can provide attractive and practical results for games, videos, photos, and even poems. However, it can be perilous in mission-critical systems, in scenarios where errors are expensive or life-threatening, or where bias is unacceptable.

For example, consider a health care institution in a sparsely resourced rural region where AI is used to enhance diagnostic and treatment planning. Or imagine a school where a single instructor uses AI-driven lesson planning to customize instruction for various pupils based on their specific ability levels.


In these circumstances, generative AI would initially appear to provide value but could cause various problems. How can we be sure the diagnosis is accurate? What about prejudice that may be present in teaching resources? Those are critical questions that need to be addressed.

Models that use generative AI are called “black box” models. Because no underlying logic is given, it is hard to understand how they arrive at their results. Even experienced researchers frequently have trouble understanding how such models operate internally. For instance, figuring out what causes an AI to accurately recognize an image of a blade of grass is famously challenging.

As a casual user of ChatGPT or another generative model, you have even less knowledge of the original training data. If you ask ChatGPT about the source of its data, it will only respond that it was trained on “a diverse range of data from the internet.” Vague assertions like that don’t inspire high levels of confidence.

AI-Produced Output Dangers

This situation can lead to hazardous circumstances. If you can’t see the connections and internal structures the model has learned from the data, or determine which data characteristics matter most to the model, you can’t comprehend why it produces specific predictions. As a result, fundamental flaws or biases in the model are hard to find or fix.
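One standard way researchers probe a black box from the outside, permutation importance, shows how indirect this detective work is: you can’t inspect the model, so you shuffle one input column and watch how much accuracy drops. A minimal sketch, where the “model” and data are hypothetical stand-ins:

```python
import random

def permutation_importance(predict, rows, labels, feature_idx, seed=0):
    """Estimate how much a black-box model relies on one feature by
    shuffling that feature's column and measuring the accuracy drop."""
    random.seed(seed)
    base_acc = sum(predict(r) == y for r, y in zip(rows, labels)) / len(rows)
    shuffled_vals = [r[feature_idx] for r in rows]
    random.shuffle(shuffled_vals)
    shuffled_rows = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                     for r, v in zip(rows, shuffled_vals)]
    shuf_acc = sum(predict(r) == y
                   for r, y in zip(shuffled_rows, labels)) / len(rows)
    return base_acc - shuf_acc  # larger drop = more important feature

# Toy "black box": secretly predicts 1 whenever the first feature is large.
predict = lambda row: 1 if row[0] > 0.5 else 0
rows = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
labels = [1, 1, 0, 0]
print(permutation_importance(predict, rows, labels, feature_idx=0))
print(permutation_importance(predict, rows, labels, feature_idx=1))
```

Even here, all the probe reveals is *that* a feature matters, never *why* the model uses it the way it does, which is exactly the opacity problem.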

I’m reminded of a scene from the famous accidental nuclear war motion picture “Fail Safe” where a technology executive tells a government official that computers can make subtle mistakes so subtle that no human could ever challenge those results in real time — and that movie was released in 1964!

Internet users have documented often unintentionally hilarious instances when ChatGPT gave incorrect or dubious replies, ranging from losing at chess to producing Python code that decided who should be tortured.

I attended a recent HP conference where a well-known industry executive expressed support for tools like ChatGPT to assist with the “tiresome” duties of performing employee performance reviews. Imagine the lawsuits that would fly if that became a regular practice.

Now, these are only the instances where the incorrect response was evident. According to some estimates, approximately 20% of ChatGPT responses are made up. It’s possible that as AI technology advances, we’ll live in a time where self-assured chatbots provide answers that sound accurate, and humans can’t tell the difference.

Push Pause on AI?

This commentary isn’t to say that we shouldn’t be enthusiastic about AI, but the world needs to proceed with prudence. Despite the press emotionalism that appears to spike any time Elon Musk comments on something, let’s not dismiss the recent industry letter he and other industry luminaries, including Steve Wozniak, signed asking for a “pause” about new AI implementations.

Unfortunately, the gold-rush mentality is unlikely to slow absent a government directive, which itself is unlikely, and regulation is years away. I’m also sensitive to the argument that the United States must lead in AI for national security reasons, particularly as China becomes a greater threat.


Nevertheless, we should be mindful of the risks and concentrate on ways to use these AI models responsibly in real-world settings. More positive AI outcomes could be achieved by training models to lower their high false-answer, or “hallucination,” rates.

Training might not be sufficient, though. By simply training models to generate our preferred outcomes, we could create a situation where AI tools are rewarded for delivering results their human judges perceive as successful, in effect encouraging them to deceive us deliberately.

It’s possible that things could become worse, and AI apps may develop sophisticated models to evade detection, perhaps even outpacing humans as some have predicted. This scenario could be tragic.

White Box Approach

There is another option. Instead of concentrating on how we train generative AI models, some companies might employ white-box or explainable machine learning models.

A white-box model, as opposed to black-box models like generative AI, is transparent and makes it easier to comprehend how the model derives its predictions and what parameters it considers.

While white-box models may be algorithmically sophisticated, they are simpler to understand because they come with justifications and context. When stating what it believes to be the correct response, a white-box implementation of ChatGPT could also indicate how confident it is in that response. For example, is it 60%, 90%, or 100% sure?

This approach would help users decide to what extent, if any, to trust answers and to understand how they were derived. Stated differently, knowing what data inputs an answer was based on would help users examine multiple variations of the same answer. That’s a step in the right direction.
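A sketch of what that confidence reporting might look like, assuming (hypothetically) the model exposes raw scores for each candidate answer; the labels and scores here are invented for illustration:

```python
import math

def predict_with_confidence(scores):
    """Turn raw model scores into an answer plus an explicit confidence.
    scores: dict mapping candidate answers to raw (unnormalized) scores."""
    # Softmax converts raw scores into probabilities that sum to 1.
    exps = {label: math.exp(s) for label, s in scores.items()}
    total = sum(exps.values())
    probs = {label: e / total for label, e in exps.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]

# Hypothetical raw scores a diagnostic model might assign.
answer, confidence = predict_with_confidence({"benign": 2.0, "malignant": 0.5})
print(f"{answer} (confidence: {confidence:.0%})")
```

Surfacing the probability alongside the answer is the key design choice: a user seeing 60% confidence knows to seek a second opinion, whereas a bare answer hides that uncertainty entirely.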

Of course, this might not be necessary for straightforward chatbot dialogue. However, having such context might be critical in situations where a false answer can have serious consequences (health care comes to mind).

A physician who uses AI to make diagnoses but can see how confident the program is in its conclusion faces significantly less risk than one who bases judgments entirely on the output of an opaque algorithm.

Human Involvement

From my vantage point, AI will undoubtedly impact business and society significantly. So, let’s leave it up to humans to select the appropriate AI technique for each circumstance.

Having a human in the AI loop might seem quaint, but it could be precisely what is needed to earn users’ trust and ensure credibility and accountability.
