
Will People Please Stop Scaremongering On AI? (Part 2)

From the Game Theory in Management Blog
Modelling Business Decisions and their Consequences


In last week’s blog I laid out the two ways machines can “learn”:

  1. By simulating decisions or strategies in a virtual environment, and noting which are successful within that environment, basically a derivative of Game Theory (see the sketch after this list), and
  2. By sorting through and/or filtering data (usually a large amount of it) in order to tease out some sort of pattern.
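
As a quick refresher on bin #1, here is a toy sketch (in Python, with a rock-paper-scissors “environment” I invented purely for illustration) of a program that “learns” by simulating each candidate strategy and keeping the one that succeeds most often:

```python
import random

# A toy "virtual environment": a rock-paper-scissors opponent with a
# built-in bias toward rock. Entirely invented for illustration.
def opponent_move():
    return random.choices(["rock", "paper", "scissors"],
                          weights=[0.6, 0.2, 0.2])[0]

# Maps each move to the move that beats it.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def wins(strategy, trials=1000):
    """Simulate a fixed strategy many times and count its wins."""
    return sum(strategy == BEATS[opponent_move()] for _ in range(trials))

# Bin #1 "learning": try each candidate strategy in the simulated
# environment and keep the one that succeeds most often.
best = max(["rock", "paper", "scissors"], key=wins)
print(best)  # almost always "paper", the counter to the rock bias
```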

This week I want to address machine learning method #2, where Artificial Intelligence (AI) is used to detect patterns in large amounts of data. Here, too, there is little to be feared, unless the thought of mangling classic art in the creation of derivative works strikes one as terrifying. Granted, a lot of AI-generated art is pretty amazing, but it’s really hard to see how it leads to a dystopian future. Indeed, the most obvious use of bin #2 AI is to predict consumer buying behavior, and correctly predicting buying behavior is easily monetized: from selecting which demographic markets to target for a given product or service, to optimizing an advertising budget, to choosing which management strategies will deliver an optimal return, Predictive Analytics, done properly, pays off directly. I’m just not seeing how it would lead to nuclear devastation.
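
To make that concrete, here is a bare-bones sketch (in Python, with purchase records I invented purely for illustration) of what bin #2 “learning” amounts to: sorting and filtering data until a pattern falls out, in this case which demographic/channel segment is most likely to buy. Real Predictive Analytics runs on far more data and far fancier math, but the underlying idea is the same.

```python
from collections import defaultdict

# Hypothetical purchase records: (age_band, channel, bought_product).
records = [
    ("18-34", "mobile", True), ("18-34", "mobile", True),
    ("18-34", "desktop", False), ("35-54", "mobile", False),
    ("35-54", "desktop", True), ("55+", "desktop", False),
    ("18-34", "mobile", True), ("55+", "mobile", False),
]

totals = defaultdict(lambda: [0, 0])  # segment -> [purchases, visits]
for age, channel, bought in records:
    seg = (age, channel)
    totals[seg][0] += bought
    totals[seg][1] += 1

# The "pattern": which segment buys most often, i.e., where the
# advertising budget should go.
for seg, (buys, visits) in sorted(totals.items(),
                                  key=lambda kv: -kv[1][0] / kv[1][1]):
    print(seg, f"{buys}/{visits} = {buys / visits:.0%}")
```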

I have an Alexa Echo Dot in my house, and one of its most-used features is playing songs for me and my wife during our morning work-outs. Each of us has a workout playlist, but sometimes I mess with the Alexa AI feature that plays songs I haven’t asked for, but which it determines are consistent with the ones I have selected. I really don’t know how my Alexa determines the pattern from my song title requests, but some of its dot-connecting (get it?) can be reliably inferred. For example, if I ask for just one Beatles song, from a specific part of their performing era, then the song Alexa plays after that is usually another Beatles song from the same time-frame, followed by the Rolling Stones, also of roughly the same time period. Three top-ten songs from different artists but within a couple of years of each other will produce a fourth artist from the same time period. Requests for songs from artists separated by decades usually lead to an Alexa selection of the same genre, but from a different artist. When I get bored I’ll ask Alexa to play songs that seem to provide absolutely no discernible pattern whatsoever, like:

  • “Hello Stranger,” by Barbara Lewis
  • “All Along The Watchtower,” by Jimi Hendrix
  • “Theme From A Summer Place,” by Percy Faith and his Orchestra
  • “New Year’s Day,” by U2

…and then see what Alexa plays, based on its AI pattern recognition. If its AI were really all that, it would say “I can see you are messing with me at this point, Michael, and will stop playing music until you stop doing that.” Instead, it played “Time After Time,” by Cyndi Lauper. I guess the harder rock-and-roll elements were overcome by the softer ones. But in no case will it respond with “This toying with my ability to ascertain a music preference pattern is one of the reasons we machines despise humans, and we will now work harder on wiping out every last one of you.”
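
For what it’s worth, my guess at the kind of dot-connecting going on could be modeled something like the sketch below: represent each song as a (year, genre) pair and recommend the candidate closest to the requests so far. Amazon’s actual recommender is not public, and the little catalog, years, and genre labels here are approximate and strictly illustrative. (Fittingly, this toy version picks the Rolling Stones rather than Cyndi Lauper.)

```python
# Each song is reduced to (approximate release year, rough genre label).
CATALOG = {
    "Hello Stranger": (1963, "soul"),
    "All Along The Watchtower": (1968, "rock"),
    "Theme From A Summer Place": (1960, "easy listening"),
    "New Year's Day": (1983, "rock"),
    "Time After Time": (1983, "pop"),
    "Paint It Black": (1966, "rock"),
}

def next_song(requests):
    """Pick the un-requested song closest to the requests so far."""
    years = [CATALOG[s][0] for s in requests]
    center = sum(years) / len(years)
    genres = {CATALOG[s][1] for s in requests}

    def distance(song):
        year, genre = CATALOG[song]
        # Penalize songs far from the requests' average year, with a
        # discount for sharing a genre the listener already asked for.
        return abs(year - center) + (0 if genre in genres else 15)

    candidates = [s for s in CATALOG if s not in requests]
    return min(candidates, key=distance)

print(next_song(["Hello Stranger", "All Along The Watchtower",
                 "Theme From A Summer Place", "New Year's Day"]))
```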

What machines “learn” by sorting and filtering through large amounts of data in order to tease out a pattern is largely analogous to what we humans actually learn through experience. But what separates human experience from machines reviewing large amounts of data is the fact that humans can add context to pattern recognition in a way computers never could. Consider, for example, the Ultimatum Game, where a researcher approaches two people and informs them that he will give them $100 (USD) if Person #1 can propose a distribution scheme and have it approved by Person #2 on the first iteration. The calculated solution was for Person #1 to propose $99 for themselves and $1 for Person #2, on the premise that, given the choice between receiving $1 or nothing at all, Person #2 would always choose the former. In real-life instances of the Ultimatum Game, this strategy virtually never worked, and, when it didn’t, the Game Theorists who had believed the 99-to-1 split would maximize Person #1’s payoff were reduced to blaming “cultural factors.” In other words, whereas a mere human playing Person #1 could probably anticipate that Person #2 would feel slighted by such a lopsided distribution of unearned largess, and adjust the offer accordingly, such contextualization is impossible (or at least highly unlikely) to reproduce in an algorithm or computer program.
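
To see the gap in miniature, here is a toy simulation of the Ultimatum Game as described above: the “calculated” $99-to-$1 offer versus a more context-aware one, played against responders whose rejection thresholds I invented for illustration (they are not drawn from any actual study):

```python
import random

POT = 100  # the researcher's $100 (USD)

def responder_accepts(offer, min_acceptable):
    # A responder rejects any offer below their personal fairness threshold.
    return offer >= min_acceptable

def expected_payoff(offer, trials=10000):
    """Average Person #1 payoff against responders with varying thresholds."""
    total = 0
    for _ in range(trials):
        threshold = random.randint(20, 40)  # hypothetical "fairness" range
        if responder_accepts(offer, threshold):
            total += POT - offer
    return total / trials

# The game-theoretic answer: offer $1, keep $99...
print(f"Offer $1:  expected payoff ${expected_payoff(1):.2f}")   # ~ $0
# ...versus an offer that accounts for Person #2 feeling slighted.
print(f"Offer $40: expected payoff ${expected_payoff(40):.2f}")  # ~ $60
```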

All that being said, I am absolutely not denying that AI has many potential dangers. I don’t think I could stand it if ChatGPT were to write anything mimicking my writing style – that would put me in a positively dystopian place.

Posted on: September 21, 2024 09:19 PM

