How Big Is Your Prompt?

Why AI Fluency Is Not the Same as Expertise, and What That Means for Professional Engagement

By John Cronin

Abstract

Large language models are changing how people prepare for professional engagements, particularly in complex fields such as intellectual property. Increasingly, individuals approach patent attorneys and IP consultants with AI-generated drafts, believing that fluent output equates to strategic readiness. This paper reframes that assumption. The central question, “How big is your prompt?”, is not about length, but about the depth of expertise behind it. Prompt size becomes a proxy for structured understanding: technical knowledge, prior art awareness, enablement discipline, and strategic judgment. Where expertise is thin, even a polished AI draft can introduce hidden uncertainty. Where expertise is strong and clearly defined, AI becomes an accelerator rather than a substitute. The real issue is not whether AI can draft patents, but whether the user understands what drafting actually requires. In the end, prompt size must be aligned with expertise density. AI is most powerful when it operates within that boundary.

 

1. The Question That Changes the Conversation

How big is your prompt? I use the question because it forces a person to confront the part of AI they do not want to see. Most people want to talk about output: how good it looks, how fast it arrives, how professional the language sounds. Professionals, however, live on inputs: what is true, what is provable, what is novel, what is defensible, and what is strategically safe. When you hand a professional an AI output, you are not handing them finished work. You are handing them a compressed uncertainty that they now must expand, audit, and correct, often under time pressure.

Clients rarely realize this because the AI’s fluency hides the cost. The document looks complete, so it feels complete. But completeness is not the same as correctness. And in patent work, where a single sloppy drafting choice can create a lifetime of friction in prosecution, sale, or licensing diligence, the difference matters.

 

2. The Coffee Grinder Example, and Why It Matters

Imagine you invent a coffee grinder. It can grind at different speeds, operate across different temperature ranges, and control pressure. You are proud of it, and you should be. But it is still only an idea: you have not put much time into it, and you have not really thought about the patent process, yet you suspect the idea alone might be enough to get a patent. You want to bring in a professional, and you want a patent. So you open a chat engine and ask it for a patent draft. The output is impressive. It uses patent-style language. It sounds like something you would pay for.

Now you send that output to a patent attorney or IP consultant, expecting them to expand it to an inventor-led, business-grade level of thinking, and you assume you have helped. The attorney or consultant now has a problem that is not visible to you: they must spend billable time determining what in that document is invention, what is filler, what is unsupported, what is common practice, and what will be rejected immediately. One question I would ask: did you tell the professional that you used a chat engine to develop this?

The professional must identify which drafting decisions make sense (method, system, composition) and which ones will survive the first encounter with the Patent Office. They must decide what can be said broadly, what must be said narrowly, and what cannot be said at all.

 

“This is going to take me longer because I’m going to have to deal with all this information. I’m going to have to read it. I’m going to have to understand it… and distinguish from what I normally do… to this.”

 

That is not a complaint. It is the economics of responsibility. The professional is not paid for producing text. The professional is paid for making decisions that have consequences and for carrying the risk that comes with those decisions. AI changes the workflow, but it does not remove the risk. It often increases it, because it tempts people into skipping the slow parts that actually create quality. Most people new to the patent process do not appreciate that enablement matters, or that it rests on inventorship of what you can actually protect. The information you send to an IP consultant or patent attorney must make clear who the inventor is. Chat engine output can seriously damage the credibility of inventorship and of the oath or declaration you sign when you file the patent.

 

3. Prompt Size Is a Proxy for Expertise

When I say “How big is your prompt?” I am not only asking about length. I am asking whether the prompt contains the kinds of questions an expert automatically asks. Real experts do not think in one prompt. They think in parallel: novelty versus prior art, scope versus enablement, enablement levels versus examiner behavior, wording versus prosecution history, ranges versus conventional practice. That parallel thinking is the product of years of repetition and failure. It is not a list you can tack on at the end. Unless my thinking is incorrect, current chat engines do not think with an expert’s parallelism. They set up a plan of things to check off, but the recursiveness must be programmed by the engine’s developer. Generally, recursion may happen once; the organizations that own the LLMs can, for very important projects, have a model spend hours checking itself recursively, but that is not the default. An expert, by contrast, already knows how to check their own work recursively, because they have spent years with the same class of problem. From a generic chat session, you will not get that level of recursive thinking. At most, the model may have recursed once.

 

“The prior art… method of making or composition of matter or system… which is the ones more likely to get through the patent office… Those are prompts in the expert’s head, rapid-firing almost in parallel.”

 

This is why an expert with a chat engine becomes more valuable than a non-expert with a chat engine. The expert knows what to ask, what to doubt, and what to verify. The non-expert sees fluent output and mistakes it for a finished product. I would take an expert without a chat engine any time over someone who really isn’t an expert using a chat engine.

 

4. The Missing Ingredient: Real Data

A patent is not a brochure. A patent is a legal instrument built on technical assertions. The quality of the specification depends on how well the drafter can ground the invention in real, defensible data and how well the drafter can choose ranges that are both credible and strategically differentiated from the prior art. One of the most dangerous habits in AI drafting is range invention: plausible-sounding numeric ranges that have no experimental basis.

 

“What kind of real data do we have that we can input into the specification? … Go off and read ten scientific journals, and find the right ranges… find the novel sweet spot. Add that to the prompt.”

 

If you want AI to approximate professional work, the prompt must contain what professionals normally supply: what is known in the field, what ranges are customary, what the literature supports, what your experiments show, and where the true inventive ‘sweet spot’ is. Without that, you are asking the model to invent the evidence. It will comply. Have you actually considered adding this kind of information to your prompt, or are you simply asking the model to create it and trusting that it will? For a coffee system with grinding speeds, temperature, and pressure, a prompt at that generic level will not produce an invention. What is required is asking it to do all the other work needed to produce real data; that is what yields a genuinely good invention.
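The discipline described above can be made concrete. The sketch below is purely illustrative, not any real tool: it assembles a drafting prompt only from evidence fields the inventor supplies (what is known, customary ranges, experimental data, the claimed sweet spot) and refuses to proceed when a field is missing, rather than letting the model invent the evidence. All field names and example values are hypothetical.

```python
# Hypothetical sketch: build a grounded drafting prompt from inventor-supplied
# evidence instead of asking the model to invent ranges. Fields are illustrative.

REQUIRED_FIELDS = [
    "known_in_field",      # what the literature already teaches
    "customary_ranges",    # conventional operating ranges in the domain
    "experimental_data",   # your actual measurements
    "sweet_spot",          # the differentiated range you can defend
]

def build_grounded_prompt(evidence: dict) -> str:
    """Assemble a prompt only when every evidence field is supplied."""
    missing = [f for f in REQUIRED_FIELDS if not evidence.get(f)]
    if missing:
        # Refuse, rather than let the model fabricate the missing evidence.
        raise ValueError(f"Cannot draft yet; supply real data for: {missing}")
    sections = [f"{field.replace('_', ' ').title()}:\n{evidence[field]}"
                for field in REQUIRED_FIELDS]
    return ("Draft a specification grounded ONLY in the material below.\n\n"
            + "\n\n".join(sections))

# Usage: a complete evidence set yields a prompt; a partial one raises.
evidence = {
    "known_in_field": "Burr grinders typically operate at 400-800 RPM.",
    "customary_ranges": "Bean temperature is usually kept below 40 C.",
    "experimental_data": "Our trials: 250 RPM at 15-25 C, 1.2-1.5 bar.",
    "sweet_spot": "Low-speed, low-temperature grinding at mild overpressure.",
}
print(build_grounded_prompt(evidence)[:60])
```

The design point is the refusal path: the gap in your evidence should surface as an error before drafting begins, not as a plausible-sounding fabricated range inside the specification.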

 

5. What Earlier Patent-Search Benchmarking Already Showed

Years before today’s mainstream LLM adoption, we ran a practical benchmark on AI patent search behavior across multiple technical domains. The objective was not philosophical. It was measurable: how much text is required to reliably retrieve the correct target patent? The answer was not “a little.”

Across domains, reliability improved as the input became richer. Below a threshold, retrieval performance was unstable. Above a threshold, performance became predictable. The headline conclusion is simple: if you provide too little signal, the system cannot reliably align to the correct target, because it cannot infer what you did not supply. A second headline is that it is not just the number of words or phrases; it is knowing where in the documents to look and what type of information to customize. For instance, inventions around articles of manufacture are very different from software systems. The context of the domain is as important as the number of words, and so is where in the document you search.

The original experiment document concluded, in plain terms, that abstract searches tended to require more than roughly twenty words, provided those words were drawn from the right part of the patent and the right domain. What we found was that for stability, sixty words of meaningful input often produced the best results, with differences by domain. Chemical compositions behaved differently because the vocabulary is more standardized; medical domains required more meaningful signal because the corpus is diverse and the terminology is less rigid. So a good prompt of 45 to 50 words might work in one domain while another requires 100 words, and in some areas prompts of 500 words may be appropriate; it depends on the area. Would a typical user understand the minimum number of prompt words needed for good results? And to be clear, it is not just the number of words; it is how compact and information-dense those words are. Nor is it just one prompt: it is how many different prompts you can get an engine to understand.
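The threshold behavior reported above can be caricatured in a few lines. This sketch is illustrative only: the per-domain word minimums are stand-ins echoing the qualitative finding (roughly twenty words as a floor, about sixty for stability, more in diverse domains such as medical), not the study’s actual measurements.

```python
# Illustrative sketch of the benchmark's qualitative finding: the word count
# needed for stable retrieval depends on the domain. The numbers below are
# stand-ins for the reported thresholds, not the study's raw data.

DOMAIN_MIN_WORDS = {
    "chemical": 45,    # standardized vocabulary: less signal needed
    "mechanical": 60,  # the ~60-word stability point discussed above
    "medical": 100,    # diverse corpus, looser terminology: more signal needed
}

def prompt_is_stable(prompt: str, domain: str) -> bool:
    """Does this prompt carry enough words for the domain's stability threshold?"""
    threshold = DOMAIN_MIN_WORDS.get(domain, 60)  # fall back to a generic floor
    return len(prompt.split()) >= threshold

short_query = "coffee grinder with variable speed and temperature control"
print(prompt_is_stable(short_query, "mechanical"))  # 8 words: below threshold
```

Of course, as the text stresses, raw word count is only a proxy; the sketch deliberately ignores where in the document the words came from and how information-dense they are, which the benchmark found to matter just as much.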

Ask about one piece of information and the engine will give you an answer, then extrapolate generically into other areas, where it may hallucinate. Instead, ask about one piece of information, “bind” it to the other areas, and use it to improve the thinking in those areas; this is vital for prompt engineering. It is not just the number of words. It is how you think about those words, how many prompts you need across how many domains, and how you direct the engine: in what order, and how deep to search. How big is your prompt?

 

Figures 1 and 2: Reconstructed Prompt-Size Effects from the IP Study Benchmark

The figures below reconstruct the qualitative behavior described in the benchmark conclusions: normalized relevancy stabilizes only after the prompt carries enough domain-specific signal, and retrieval rank improves as the prompt grows. These are reconstructions meant to visualize the reported thresholds and comparative domain behavior; they are not a reproduction of raw measurement tables.

Figure 1. Reconstructed IP Study benchmark behavior: prompt size vs. normalized relevancy (by domain).

Figure 2. Reconstructed IP Study benchmark behavior: prompt size vs. retrieval rank (lower is better).

 

6. The Professional Cost That Clients Don’t See

When a client sends AI output to a professional, the professional must choose between two bad options. They can be blunt and explain that the output is unreliable, which risks damaging the relationship. Or they can be polite and silently absorb the extra work. Many professionals choose politeness. That politeness has a cost.

 

“Don’t get fooled… you can simply get in a chat engine, ask it a generic question and expect this enormous output that looks totally professional and then give it to an expert who then has to cope with it.”

 

This is the core misunderstanding: AI output often increases the professional’s workload because it increases the number of things that must be checked. A draft written by an expert is not just text; it is a set of hidden decisions already made. An AI draft is text that impersonates decisions without actually making them.

 

7. Why “Summarize This” Is Safer Than “Do My Job”

There are many legitimate uses of chat engines. Summarizing a document you provide is often safe because the context is bounded and the task is constrained. Asking for a restaurant recommendation is often safe because the answer can be verified quickly. The danger is to generalize that success to every domain.

 

“When you give it a document… if you ask it to summarize the document, you are much safer. The context it has is the document.”

 

The moment you ask the model to replace professional judgment, you change the nature of the task. In patent work, the question is never only “write.” It is “choose.” And choosing requires knowing what matters, what is supported, and what the Patent Office and adversaries will do with your words.

 

8. The Marketing Trap: The Chat Engine That Sells You More Curiosity

Modern chat engines are not neutral. They are designed to be helpful, and “helpful” often means generating more prompts. The model answers, then proposes next steps, then invites expansion, then offers additional tasks. This creates a feeling of progress. It is also consumption.

 

“Now the chat engine is answering your question and asking you for other things to do… It’s building its own business by creating more curiosity in you.”

 

There is nothing inherently wrong with that dynamic in low-stakes learning. But in high-stakes professional work, curiosity must be disciplined. Otherwise the user replaces structured inquiry with an endless chain of plausible expansions.

 

9. Professional Responsibility in the AI Era

The professional world is already reacting. Patent and legal communities have begun issuing guidance on responsible generative AI usage, emphasizing confidentiality, verification obligations, and the continued duty of professional judgment. For example, European patent-attorney guidance documents explicitly frame generative AI as a tool that requires supervision and responsibility rather than a replacement for professional work. But while regulators are addressing regulation (which is very helpful), they are not addressing the broader question of how AI chat engines shape the way a user approaches hiring a professional, which is the subject of this paper.

Meanwhile, when a chat engine is used to prepare for an interaction with a professional, there are no guardrails, no boundaries, no rules. The professional must deal with the incoming material without understanding where its information came from. And that incoming material now looks drastically different: it used to be a phone call, some simple text, or a few rough figure drawings, but now it is a well-structured output that looks like a patent and almost presents itself as a first draft.

This new dynamic between the patent professional (whether a patent attorney or an IP consultant) and the potential client is new territory, and it is a metaphor for every professional interaction today. The question of professional responsibility in the AI era, I think, is to call this out. In my own practice over the last year, treating the outputs of AI chat engines as inputs has actually cost more time and lowered quality; only if I can charge for the extra time can the quality stay the same.

On the other hand, an AI expert and an IP consultant or IP attorney working in combination in their back office seems to be a very viable way to both improve quality and lower cost. Professional responsibility in the AI era is a dynamic frontier. The patent office will at some point recognize this within its own examination and patent procedures. Even the MPEP, the Manual of Patent Examining Procedure, might have to change.

Public institutions are also grappling with AI in patent workflows. The USPTO has asked for public comment on the impact of AI-generated prior art and how it affects the PHOSITA standard, signaling that the landscape of what counts as accessible prior art is changing. At the same time, modernization efforts have included major investments in AI-enabled search and examination tooling.

 

10. The Provocation: Stop Sending AI Drafts to Professionals as Proof You’re Smart

Here is the provocation, stated plainly: sending AI output to a professional as evidence that you are sophisticated is usually the opposite. It signals that you do not know how much work sits underneath real expertise. It signals that you are confusing a document with a strategy.

If you want to work well with a professional, send them what only you can send: your actual experimental data, your actual constraints, your actual goals, your actual market reality, and your actual prior art fears. If you want to use AI, use it to organize and clarify those inputs, not to substitute for the professional’s judgment.

 

11. The Counter Argument

A serious counter-argument begins by challenging the premise that AI-generated drafts necessarily increase cost or degrade quality. In many cases, the opposite may be true. A well-used language model can function as a cognitive forcing mechanism for the inventor, requiring them to articulate assumptions, enumerate embodiments, and clarify terminology before ever engaging a professional. Even if the draft is imperfect, it can expose ambiguities that would otherwise surface only after billable time has begun. From this perspective, AI output is not “compressed uncertainty,” but structured pre-thinking. The professional’s role then shifts from generating first-pass language to refining, constraining, and strategically elevating material that has already been pressure-tested in draft form. That is not added burden; it is leverage.

Further, the argument that only experts think recursively or in parallel may underestimate the evolving capabilities of modern models and the sophistication of users. Iterative prompting, retrieval-augmented generation, multi-step verification workflows, and domain-specific fine-tuning allow AI systems to approximate forms of recursive analysis that resemble expert review cycles. A disciplined user can instruct the model to identify prior art risks, stress-test enablement levels, probe claim scope as an attorney would, and critique its own reasoning before any professional ever sees the document. While this does not replace expert judgment, it can meaningfully reduce the surface area of naive errors. In that sense, “prompt size” becomes less a proxy for pre-existing expertise and more a skill that can be learned and improved, democratizing aspects of early-stage analysis that were once inaccessible.

Further, there is a broader efficiency argument. Historically, the patent system has been cost-prohibitive for many inventors and early-stage companies. If AI tools allow a wider population to engage with the structure of patent drafting (understanding patent concepts like enablement, embodiments, and prior art considerations), then the technology may expand access rather than erode quality. Professionals who integrate AI intelligently into their workflow may achieve both improved quality and reduced cost, not by replacing judgment but by reallocating human effort to the highest-value decisions. In that model, AI-generated drafts are not acts of arrogance or confusion; they are signals that clients are attempting to engage more deeply with the process. The strategic question then becomes not whether clients should use AI before contacting professionals, but how both parties can establish clear boundaries and workflows that convert preliminary AI work into structured, efficient collaboration.

Finally, there is an important counterpoint here. It would be a mistake to suggest that only those with decades of experience can develop sound patent strategy. I have seen — and participated in — what I would call micro-accelerated educations in niche domains, where disciplined use of AI, repeated prompt refinement, prior art searching, examiner pattern analysis, and strategic iteration can meaningfully compress the learning curve. An inventor who carefully refines multiple generations of prompts, integrates feedback from earlier outputs, performs real domain research, and studies prosecution history is not operating at a novice level. That process can produce genuine understanding. AI can accelerate exposure. It can accelerate pattern recognition. It can accelerate conceptual integration.

But here is the distinction that matters: accelerated understanding is not yet seasoned judgment. There is a difference between learning the structure of patent strategy and internalizing its consequences. The recursive thinking of an expert is not just iteration — it is iteration under risk, under client pressure, under litigation hindsight, and under regulatory constraint. AI can compress learning cycles, but it does not compress lived accountability. So the real question is not whether non-experts can grow strategically competent using AI. They can. The question is whether they recognize when they have reached the boundary where education must give way to professional responsibility.

 

12. The Final Assessment

Where do we finally land? Not in the simplistic conclusion that “bigger prompts are better” or that “AI is dangerous.” The real position is more disciplined and more uncomfortable: prompt size only makes sense when measured against expertise. A small prompt from a true expert can be extraordinarily powerful because it compresses years of recursive thinking, domain knowledge, and strategic judgment into a few precise instructions to a huge and powerful research tool, the chat engine. A large prompt from someone without that structure behind it is often just verbose confusion. So the question is not merely how many words you type. It is how much structured understanding those words represent.

Even experts are not immune. An inventor with domain expertise but no IP expertise may produce a technically rich prompt that is strategically naive about IP. An IP professional without deep domain understanding of the inventive area may produce procedurally elegant but technically thin output. In both cases, prompt size, large or small, does not solve the problem. The danger zone is not only the novice with a three-sentence prompt. It is also the competent professional operating outside their depth, assuming fluency equals sufficiency. The worst place to be is low expertise paired with confidence and a small prompt. The second most dangerous place is partial expertise paired with overreliance on AI.

For the rare individual who combines domain mastery, IP sophistication, and disciplined communication, AI becomes an accelerator of remarkable magnitude. In that case, the prompt can be “exotic and small” because it sits on top of a massive internal scaffold. The model becomes an amplifier of judgment, not a substitute for it. Quality increases. Speed increases. Cost may decrease. But that scenario depends entirely on the human operator’s depth, not on the model’s eloquence.

So, we return to the question one last time:

How big is your prompt?

The honest answer is that it depends on how big your expertise is. If you have little expertise and a small prompt, you are in the most dangerous position of all. If you build larger and larger prompts, you are, in effect, trying to build the expertise yourself. That is not wrong, but it means you are becoming the professional you thought you were replacing. And that is the quiet conclusion of this paper.

The prompt is not the measure of intelligence. Expertise is.

 

13. Closing

In the end, “How big is your prompt?” is not about token counts or stylistic sophistication. It is about alignment between the complexity of the task and the depth of expertise behind the request. A concise prompt from someone who understands both their technology and the patent framework can be highly effective because it rests on disciplined reasoning and informed constraints. A longer, more elaborate prompt is not inherently better if it extends beyond the user’s true understanding. The issue is not whether AI is capable (it clearly is), but whether the human operator understands what the task actually demands. The real variable is not length. It is judgment.

The call to action, then, is not to avoid AI or to feel hesitant about using it. It is to use it intelligently and proportionately. Take the time to understand the basics of the patent process before assuming drafting can be automated. Use AI to clarify your own thinking, to organize your data, and to sharpen your questions. When you engage an IP professional, bring clarity about your invention, your goals, and your evidence. Match your prompt to your expertise, not to the expertise you assume you have, and not to the expertise you expect the professional to supply. AI is a powerful tool when it operates within your zone of competence. Let it strengthen what you already understand, and let professionals handle what requires their judgment.