AI prompting? Not to worry. The horse is learning to whisper to us.

the horseless carriage: an awkward stage in the horse-technology relationship

As discussed in the previous LOTRW post, computer science departments at universities nationwide are offering courses in AI prompting – a new discipline, mixing the science and art required to harness the full power of artificial intelligence. A new subject for human beings to master? The prospect is daunting, in part because of the complexity of AI and the diversity of its applications – and in part because the AI landscape itself is mutating rapidly. It’s hard to keep up.

Take solace from a Harvard Business Review article of 2023 entitled AI Prompt Engineering Isn’t the Future. The author, Oguz A. Acar, points out that AI – the disease – will itself provide the cure. It is rapidly developing the capacity to assist in this professional niche and may in time take over the whole of it. To extend the metaphor of the earlier post, the horse will be opening a special whispering channel for us – perhaps even softly neighing sweet nothings into our ears.

But Mr. Acar doesn’t really let us off the hook entirely. He goes on to replace that challenge with a different but closely related one:

So, what is a more enduring and adaptable skill that will keep enabling us to harness the potential of generative AI? It is problem formulation — the ability to identify, analyze, and delineate problems.

Problem formulation and prompt engineering differ in their focus, core tasks, and underlying abilities. Prompt engineering focuses on crafting the optimal textual input by selecting the appropriate words, phrases, sentence structures, and punctuation. In contrast, problem formulation emphasizes defining the problem by delineating its focus, scope, and boundaries. Prompt engineering requires a firm grasp of a specific AI tool and linguistic proficiency while problem formulation necessitates a comprehensive understanding of the problem domain and ability to distill real-world issues. The fact is, without a well-formulated problem, even the most sophisticated prompts will fall short. However, once a problem is clearly defined, the linguistic nuances of a prompt become tangential to the solution.

Unfortunately, problem formulation is a widely overlooked and underdeveloped skill for most of us. One reason is the disproportionate emphasis given to problem-solving at the expense of formulation. This imbalance is perhaps best illustrated by the prevalent yet misguided management adage, “don’t bring me problems, bring me solutions.” It is therefore not surprising to see a recent survey revealing that 85% of C-suite executives consider their organizations bad at diagnosing problems.

It’s hard to read this without wanting to get better at problem formulation. Happily, Mr. Acar has identified four key elements to the process, along these lines:

Problem diagnosis – identifying the core problem to be solved. Typically, this involves looking deeper than the mere symptoms to discern the underlying problems.

Deconstruction – breaking down complex problems into simpler subproblems.

Reframing – changing the perspective from which the problem is viewed.

Constraint design – bounding the problem.

This last one is a bit more complicated. He puts it this way:

Problem constraint design focuses on delineating the boundaries of a problem by defining input, process, and output restrictions of the solution search. You can use constraints to direct AI in generating solutions valuable for the task at hand. When the task is primarily productivity-oriented, employing specific and strict constraints to outline the context, boundaries, and outcome criteria is often more appropriate. In contrast, for creativity-oriented tasks, experimenting with imposing, modifying, and removing constraints allows exploring a wider solution space and discovering novel perspectives.
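Acar’s distinction between strict and loose constraints can be sketched in a few lines of code. The `build_prompt` helper below is purely hypothetical – nothing from the article – but it illustrates the idea: a productivity task gets explicit input, process, and output restrictions, while a creativity task drops them to widen the solution space.

```python
def build_prompt(task, constraints=None):
    """Compose a prompt from a task statement plus optional constraints
    on context (input), method (process), and outcome (output)."""
    lines = [f"Task: {task}"]
    for label, value in (constraints or {}).items():
        lines.append(f"{label}: {value}")
    return "\n".join(lines)

# Productivity-oriented: specific, strict constraints.
strict = build_prompt(
    "Summarize this quarterly report",
    {
        "Context": "audience is the board; they have 5 minutes",
        "Process": "use only figures stated in the report",
        "Output": "three bullet points, under 60 words total",
    },
)

# Creativity-oriented: remove the constraints entirely and let the
# model explore a wider solution space.
loose = build_prompt("Suggest new uses for the quarterly report data")
```

Experimenting, as Acar suggests, would then amount to adding, tightening, or deleting entries in the constraints dictionary and comparing what comes back.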

Mr. Acar sums up on this note:

Although prompt engineering may hold the spotlight in the short term, its lack of sustainability, versatility, and transferability limits its long-term relevance. Overemphasizing the crafting of the perfect combination of words can even be counterproductive, as it may detract from the exploration of the problem itself and diminish one’s sense of control over the creative process. Instead, mastering problem formulation could be the key to navigating the uncertain future alongside sophisticated AI systems. It might prove to be as pivotal as learning programming languages was during the early days of computing.

In other words, to someone in the business and financial world, harnessing the full power of AI for human benefit looks like a problem (as well as a wonderful curriculum-development opportunity) – but one for B-schools everywhere, as opposed to computer science or data science departments. To an outsider, though, it seems the future will look more like “both-and” than “either-or.” Expect the courses in prompting to stick around, as well as the courses in problem formulation. The need for, and offerings of, university education will continue to balloon…

Did you catch Mr. Acar’s earlier phrase: problem formulation necessitates…[an] ability to distill real-world issues? The next LOTRW post looks at how this is playing out with respect to climate change.


One Response to AI prompting? Not to worry. The horse is learning to whisper to us.

  1. While I absolutely agree that problem formulation is an essential skill, I foresee three problems in applications with AI.
    • The formulator has to take off their ideological and emotional blinders. Current economic models generally assume a rational investor, investing based on logic. Kahneman’s work especially is important here, pointing out that “it ain’t necessarily so.” AI will find solutions to the problems it’s given, but if the problem formulation is warped by ideology or emotion, then AI’s solution may or may not solve the real problem.
    • What Acar seems to consider are problems with deterministic solutions. It is unclear how well AI can cope with wicked problems. For example, the response of a complex adaptive system to an AI-recommended solution often cannot be accurately predicted. Oh, what chaos might ensue!
    • AI is a logical engine – a highly sophisticated one, but still based on logic. A consequence of Gödel’s theorem is that there are problems that can be logically formulated, but that cannot be solved using the same logic (I think of these as a particularly gnarly class of wicked problems). Not sure what kind of hash AI will make of these! It is possible that fuzzy logic might be a way around this.

    This is a very good chain of posts, Bill. Much appreciated!
