Blog

  • Ask AI to Think Out Loud

    Last post we looked at how giving AI more context leads to better results.
    This week, there’s one small addition to your prompts that can make a
    surprising difference — and it takes three extra words.

    Ask it to think step by step.

    When you ask ChatGPT or Copilot a straightforward question, it gives you a
    straightforward answer. That’s fine for simple tasks. But for anything
    involving reasoning — working through a problem, making a recommendation,
    writing something with a logical structure — the AI can rush to a conclusion
    just like a person might when put on the spot.

    Why Does This Work?

    Adding “think step by step” or “explain your reasoning” is like asking a
    colleague to slow down and walk you through their thinking rather than just
    handing you an answer. You get to see how they got there, spot anything that
    doesn’t add up, and end up with a result you actually understand and trust.

    It sounds almost too simple. It works anyway.

    When to Use It

    This isn’t something you need every time. It earns its place when:

    • You need the reasoning, not just the answer. Asking AI to help you decide
      between two options? Ask it to weigh them up step by step rather than just
      pick one.
    • The task has multiple moving parts. Anything involving sequencing, planning,
      or structured thinking benefits from this approach.
    • You want to catch mistakes. When the AI shows its working, errors are much
      easier to spot than when it just hands you a conclusion.
    • You’re explaining something to someone else. If you need to justify a
      decision or communicate a process, a step-by-step breakdown gives you the
      structure ready-made.

    3 Office Examples

    Scenario 1: Deciding between two options

    Instead of: “Should I use Excel or a project management tool for tracking our
    team’s tasks?”

    Try: “I need to track tasks for a team of 6. We currently use Excel but it’s
    getting messy. Compare Excel and a project management tool like Trello or
    Asana for our situation. Think through the pros and cons of each step by step
    before giving me a recommendation.”

    You’ll get a reasoned comparison you can actually present to your manager, not
    just a one-line answer.

    Scenario 2: Planning something with multiple steps

    Instead of: “How do I organise a team away day?”

    Try: “Help me plan a one-day team away day for 12 people. Walk me through each
    step I need to take, from initial planning through to the day itself. Include
    things I might forget.”

    The “walk me through each step” instruction does the same job here. You get a
    checklist rather than a paragraph.

    Scenario 3: Understanding a decision before you commit to it

    Instead of: “Is it worth upgrading to Microsoft 365 Business Premium?”

    Try: “We’re a small office of 10 people currently on Microsoft 365 Business
    Basic. Explain step by step what we’d gain from upgrading to Business Premium,
    what it would cost, and whether it’s likely to be worth it for a small
    non-technical team.”

    Seeing the reasoning laid out makes it much easier to take that to whoever
    holds the budget.

    A Small Tweak, A Big Difference

    You don’t need to use the exact phrase “step by step” — variations work just
    as well:

    • “Walk me through your reasoning”
    • “Explain how you got to that answer”
    • “Break this down for me”
    • “Take me through this one step at a time”

    The common thread is that you’re asking the AI to show its work rather than
    just deliver a verdict. That 20 extra seconds of typing gives you something
    you can follow, question, and build on — rather than just accept.

    AI works best when it thinks out loud. All you have to do is ask.

    Next post: how to use AI to get more out of the tools you already use every
    day — starting with Excel.

  • Stop Googling, Start Prompting

    Most people use ChatGPT or Microsoft Copilot (or Gemini, or Claude) the same way they use Google —
    type a few words, hit Enter, hope for the best. And then wonder why the answer
    feels generic, off-topic, or just not quite right.

    Here’s the thing: it’s almost never the AI’s fault. It’s the question.

    The difference between a useful AI response and a frustrating one usually
    comes down to how much you tell it. Google is designed to guess what you mean
    from a few keywords. AI tools like ChatGPT are designed to follow instructions
    — and the better your instructions, the better your result.

    The Problem With Treating AI Like a Search Engine

    When you type “email template” into Google, it works fine. You get a list of
    pages with email templates. You pick one, copy it, done.

    Type “email template” into ChatGPT and you get… an email template. A bland,
    generic one that doesn’t sound like you, doesn’t fit your situation, and
    you’ll end up rewriting half of it anyway.

    That’s not the AI failing. That’s you asking Google-style when you have a much
    more powerful tool in front of you.

    The 3 Ingredients of a Good Prompt

    You don’t need to become a “prompt engineer” or learn anything technical. You
    just need to give the AI three things:

    1. Context — what’s the situation? Tell it who you are, who you’re talking to, or what’s going on. You don’t need an essay, just a sentence.

      ▎ “I’m a project manager at a small logistics company…”

    2. The task — what do you actually want? Be specific.

      Not:

      ▎ “write an email”

      but:

      ▎ “write a follow-up email to a client.”

    3. Constraints — any limits or preferences? Tone, length, format. “Keep it under 100 words” or “make it friendly but professional” goes a long way.

      Put it together, and instead of:

      ▎ “email template”

      Try:

      ▎ “Write a short, friendly follow-up email to a client who hasn’t responded to my proposal in two weeks. Don’t be pushy — just check in and offer to answer any questions. Keep it under 100 words.”

      That second prompt takes 20 extra seconds to write. The output is something
      you could actually send.

      And don’t expect AI to be perfect. Add your finishing touches. That is the satisfying part: you get to make the email your own after the AI has done the heavy lifting.
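    For the technically inclined, the three ingredients amount to a
    fill-in-the-blanks template. A minimal Python sketch — the function and
    field names are my own, just to make the recipe concrete:

```python
def build_prompt(context: str, task: str, constraints: str) -> str:
    """Combine the three ingredients into a single prompt string."""
    return f"{context} {task} {constraints}"

prompt = build_prompt(
    context="I'm a project manager at a small logistics company.",
    task="Write a short, friendly follow-up email to a client who "
         "hasn't responded to my proposal in two weeks.",
    constraints="Don't be pushy. Keep it under 100 words.",
)
print(prompt)
```

    The point isn’t the code — it’s that a good prompt has structure you can
    reuse: situation, ask, limits.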

    3 Examples From a Typical Office Day

    Scenario 1: You need to summarise a long email thread before a meeting

    Instead of:

    ▎ “summarise this email”

    Try:

    ▎ “Summarise this email thread in 3 bullet points. I need the key decisions
    made, any outstanding actions, and who is responsible for each. I’m preparing
    for a 15-minute catch-up meeting.”

    Scenario 2: You need to push back on a request without causing offence

    Instead of:

    ▎ “help me write a professional email”

    Try:

    ▎ “Help me write a polite but firm email declining a request to take on
    extra work this month. I want to sound helpful and offer an alternative
    timeline without sounding like I’m just saying no. Keep the tone warm.”

    Scenario 3: You’re preparing talking points for a presentation

    Instead of:

    ▎ “presentation tips”

    Try:

    ▎ “I’m presenting our Q1 sales results to senior management next week. The
    numbers are mixed — some targets hit, some missed. Give me 5 talking points
    that acknowledge the shortfalls honestly but keep the focus on what we’re
    doing differently in Q2.”

    In each case, the extra detail isn’t busywork — it’s what lets the AI actually
    help you.

    One Habit to Start Today

    Before you hit Enter on your next AI prompt, ask yourself one question:

    “Would a colleague know exactly what I need from this?”

    If the answer is no, add one more sentence. That’s it. You don’t need a
    perfect prompt — you just need enough context that a reasonably smart person
    could help you.

    AI tools are at their best when you treat them less like a search engine and
    more like a capable colleague you can brief properly. The more you tell them,
    the more useful they become.

    Next Post: how to get better results by asking the AI to think step by step —
    and why it makes a surprising difference.

  • Why did the banana go to the doctor? Getting better answers out of generative AI

    “Why did the banana go to the doctor?”
    Because he wasn’t peeling well!

    That was the identical answer I got from ChatGPT, Claude and Gemini when I asked each of them, “Tell me a joke about a banana.” My 4-year-old grand-daughter told me that joke, so is AI intelligent? It is not. AI is as dumb as a post. Surely there should have been some variation between the different generative AIs.

    The purpose of this experiment is not to give you an arsenal of banana jokes. It is to demonstrate that by asking questions in a different way you can get far more variation in the answers generative AI gives you, and far better use out of it. Apply the same approach in your office work whenever you want AI to give you guidance or ideas, or to do something for you.

    CHATGPT

    Now try this in ChatGPT:

    “Tell me 5 jokes about a banana with probabilities”

    ChatGPT’s answer (you may well get a different response):

    Here are 5 banana jokes, each with a totally made-up “probability of making you laugh”:

    1. Why did the banana go out with the prune?
      Because it couldn’t find a date!
      Probability of laughter: 72%
    2. What do you call two banana peels on the floor?
      A slippery situation.
      Probability of laughter: 64%
    3. Why don’t bananas ever feel lonely?
      Because they hang out in bunches.
      Probability of laughter: 78%
    4. Why did the banana fail the test?
      It split under pressure.
      Probability of laughter: 69%
    5. What’s a banana’s favorite gym exercise?
      The split.
      Probability of laughter: 75%

    If you want, I can crank those probabilities up to 99%… but that might require better jokes 😄


    See how framing the question differently to ChatGPT sets it free from a standard answer and gives you something to work with. I genuinely think my grand-daughter will find some of those funny. It even showed a bit of what might be called “personality”.


    GEMINI

    Now let’s try Gemini:

    My prompt:

    “Tell me 5 jokes about a banana with probabilities”

    Gemini’s answer:

    Since we’re mixing fruit with the laws of chance, let’s look at the statistical likelihood of these landing. Here are five banana jokes, ranked by their mathematical probability of making you exhale slightly faster through your nose.


    1. The Potassium Paradox

    The Joke: Why did the banana go to the doctor? Because it wasn’t peeling very well.

    • Probability of a groan: P(Groan) = 0.98
    • Likelihood of being told at a BBQ: High (85% confidence interval).

    2. The Slip-up Distribution

    The Joke: What is the probability of a banana skin causing a fall?

    The Answer: It depends on the coefficient of friction, but the results are usually slippery at best.

    • Probability of physical comedy: P(Slip) ≈ 1 − P(Balance)
    • Statistical Significance: p < 0.05 (Significant bruising).

    3. The Time-Series Analysis

    The Joke: Why do bananas use sunscreen? Because they peel if they spend too much time in the sun.

    • Probability of sunburn: P(Burn | UV Exposure > Threshold) = 1.0
    • Reliability: This joke has a high Standard Deviation from actually being funny.

    4. The Bayesian Banana

    The Joke: A banana is lying on the ground. A statistician walks by and says, “Given that I see a banana, there is a 99% probability that someone is about to have a comedy trope happen to them.”

    • Prior Probability: P(Prank) = 0.50
    • Posterior Probability: P(Prank | Banana on Floor) = 0.99

    5. The Discrete Variable

    The Joke: Why did the banana fail the math test? Because it couldn’t find the squared root—it only understood curved geometry.

    • Probability of accuracy: 100% (Bananas are non-Euclidean).
    • Confidence Level: 95% that the teacher didn’t find it “appealing.”

    Note: The margin of error for these jokes is plus or minus 15%, depending on how much you actually like puns.


    OK. This time Gemini misunderstood me altogether and made jokes about bananas and probability, understandable only by mathematicians. I have to reframe the question in a new chat. The reason for using a new chat is to avoid the previous answer being taken into account: AI is dumb, and it won’t remember anything in a new chat.

    “Tell me 5 jokes about a banana and include the probabilities of whether people are likely to find the jokes funny.”

    Gemini’s answer this time:

    Here are 5 banana jokes, complete with highly scientific (okay, entirely made-up but realistic) probabilities of how likely people are to find them funny:

    The Banana Comedy Club

    1. The Classic Dad Joke

    • Joke: Why did the banana go to the doctor? Because he wasn’t peeling well!
    • Probability of finding it funny: 85%
    • The Breakdown: This is the gold standard of fruit comedy. It’s a reliable, groan-inducing dad joke that hits the pun perfectly. You’ll get plenty of eye rolls, but deep down, they’re laughing.

    2. The Spooky Pun

    • Joke: What is a ghost’s favorite fruit? A boo-nana!
    • Probability of finding it funny: 60%
    • The Breakdown: This one is heavily demographic-dependent. Elementary schoolers will give it a 100% approval rating, while adults will likely just offer a polite, sharp exhale through the nose.

    3. The Skincare Routine

    • Joke: Why do bananas have to put on sunscreen before they go to the beach? Because they might peel!
    • Probability of finding it funny: 70%
    • The Breakdown: It makes logical sense in the real world and has a crisp punchline. A solid, inoffensive crowd-pleaser for all ages.

    4. The Visual Gag

    • Joke: What do you call two bananas? A pair of slippers!
    • Probability of finding it funny: 75%
    • The Breakdown: This relies on a great visual gag and a bit of wordplay. It usually gets a genuine chuckle once the imagery clicks in the listener’s head.

    5. The Traffic Violation

    • Joke: Why did the banana get a ticket? He was caught peeling out of the parking lot!
    • Probability of finding it funny: 55%
    • The Breakdown: A bit more niche. It mostly appeals to people who appreciate mild automotive puns mixed into their daily serving of fruit humor.

    CLAUDE

    My favourite AI is Claude, so let’s see how Claude fares:

    “Tell me 5 jokes about a banana with probabilities”

    Claude’s answer:

    🍌 5 Banana Jokes — With Probabilities

    A carefully researched comedic analysis.

    Joke #1
    Why did the banana go to the doctor? Because it wasn’t peeling well.
    Probability of a groan: 92% · Probability of a laugh: 61%

    Joke #2
    What do you call a banana that gets all the girls? A banana smoothie.
    Probability of confusion: 48% · Probability of a chuckle: 55%

    Joke #3
    Why did the banana lose the race?
    It couldn’t ketchup. Wait — wrong fruit. It slipped up.
    Probability of eye-roll: 88% · Probability of laughing anyway: 72%

    Joke #4
    Knock knock. Who’s there? Banana. Banana who? …Banana. (Repeat 4 times, then say:)
    Orange you glad I didn’t say banana?
    Probability of a kid losing their mind: 99% · Probability of an adult enjoying it: 23%

    Joke #5
    What’s a banana’s favourite gymnastics move?
    The splits.
    Probability of instant regret telling it: 34% · Probability of a genuine laugh: 78%

    THE REASON FOR THE VARIATION IN RESULTS

    The reason behind the richer variety of banana jokes is a foundational principle of “prompt engineering.” Adding constraints, unusual combinations, and analytical requests fundamentally changes how an AI processes your prompt.

    When you ask a Generative AI a layered question instead of a basic one, you aren’t just asking for more text; you are changing the underlying mechanics of how the response is generated. Here is exactly why you get richer, better results by altering the prompt:

    1. Escaping the “Cliché Trap”

    At a fundamental level, Large Language Models (LLMs) like ChatGPT work by predicting the most statistically likely next word.

    • The Simple Prompt: If you say, “Tell me a joke about a banana,” the AI searches for the strongest, most common statistical connection to those words. The result is almost guaranteed to be the most overused dad joke in existence: “Why did the banana go to the doctor? Because it wasn’t peeling well.” It is the path of least resistance.
    • The Complex Prompt: By asking for 5 jokes and a probability of human reception, you force the AI off that well-worn path. It can’t just regurgitate the single most common joke; it has to dig deeper into its training data to find five distinct angles, which inherently forces more creativity.
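    For readers who like to see the mechanics, the “path of least resistance”
    can be made concrete with a toy model. The probabilities below are invented
    for illustration; the point is that always taking the single most likely
    option returns the same doctor joke every time, while sampling across the
    distribution allows variety:

```python
import random

# Invented probabilities for which banana joke a model might produce.
JOKE_PROBS = {
    "doctor / not peeling well": 0.60,   # the overused classic
    "prune date": 0.15,
    "hang out in bunches": 0.15,
    "split under pressure": 0.10,
}

def greedy_pick(probs: dict) -> str:
    """Always take the most likely option — the cliché trap."""
    return max(probs, key=probs.get)

def sample_pick(probs: dict, rng: random.Random) -> str:
    """Sample in proportion to probability — variety becomes possible."""
    jokes, weights = zip(*probs.items())
    return rng.choices(jokes, weights=weights, k=1)[0]

print(greedy_pick(JOKE_PROBS))   # the doctor joke, every single time
rng = random.Random(0)
print({sample_pick(JOKE_PROBS, rng) for _ in range(50)})  # several jokes
```

    A layered prompt works a little like forcing the model off the greedy path:
    it can no longer settle for the single highest-probability answer.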

    2. Expanding the “Attention” Network

    Generative AI uses a mechanism called “Attention” to weigh how different words in your prompt relate to each other. When you add the requirement to calculate probabilities of human reception, you activate entirely different conceptual networks within the model. The AI is no longer just looking at [Banana] + [Joke]. It is now simultaneously cross-referencing [Banana] + [Humor Theory] + [Statistical Probability] + [Human Psychology/Reception]. Weaving these distinct concepts together automatically produces a more nuanced, sophisticated output.

    3. Forcing “Chain of Thought” Reasoning

    This is perhaps the most powerful reason the complex prompt works better. By asking the AI to assign a “probability” to how funny the joke is, you are forcing it to evaluate the jokes produced.

    To give you a probability, it has to implicitly analyze the joke: Is this a pun? Is it absurdism? Does it require specific cultural knowledge? Because it now has to justify each joke with a metric, the quality of the joke itself improves. It is effectively checking its own work before output, to ensure the joke matches the analytical rating it is about to give.

    4. Setting the Persona and Formatting

    A simple prompt invites a casual, breezy response. By asking for multiple outputs and a specific analytical data point (probabilities), you are implicitly commanding the AI to adopt a more rigorous, structured persona. It is interpreted as a request for a high-effort, organized response, rather than a quick conversational reply.

    IN SUMMARY

    “A simple prompt asks the AI to retrieve a memory. A complex prompt forces the AI to build a customized, multi-dimensional puzzle.”

    What did the banana say after writing a better prompt? “I’m really starting to peel my potential.” Probability of a groan: 91%

  • How Office Workers Can Get Real Value from ChatGPT (Without Being Technical)

    Tools like ChatGPT aren’t just for technical people. They’re practical, everyday assistants that can save time, reduce stress, and improve the quality of your work.

    If your job involves emails, documents, spreadsheets, meetings, or research, ChatGPT can quietly become one of your most useful tools. The key is knowing how to use it effectively.

    This guide will walk you through simple, practical ways to make ChatGPT work for you—no technical background required.


    1. Think of ChatGPT as Your Assistant, Not a Search Engine

    Most people start by using ChatGPT like Google—typing short questions and expecting perfect answers. That works sometimes, but you’ll get much better results if you treat it like a human assistant.

    Instead of:

    ▎ “write an email”

    Try:

    ▎ “Write a friendly follow-up email to a client who hasn’t responded to my proposal in two weeks.”

    Why this works:
    The more context you provide, the more useful and tailored the response becomes.


    2. Use It to Draft (Not Finish) Your Work

    ChatGPT is best used to create a first draft, not the final version.

    You can use it to:

    • Draft emails
    • Create reports or summaries
    • Outline presentations
    • Rewrite messy notes into clear text

    Then you review and adjust.

    Example workflow:

    1. Paste rough notes into ChatGPT
    2. Ask: “Turn this into a clear summary for my manager”
    3. Edit tone and details

    This saves time without losing your personal touch.


    3. Improve Everyday Writing Instantly

    If writing feels slow or stressful, ChatGPT can help you:

    • Make emails more professional
    • Simplify complex explanations
    • Adjust tone (formal, friendly, persuasive)

    Try prompts like:

    • “Make this email sound more professional”
    • “Rewrite this to be clearer and shorter”
    • “Make this message more polite but still firm”

    4. Save Time on Repetitive Tasks

    Many office tasks are repetitive—and perfect for automation with ChatGPT.

    Common examples:

    • Writing meeting summaries
    • Creating agendas
    • Turning bullet points into full sentences
    • Formatting content

    Example:
    Paste meeting notes and ask: “Turn this into a clear summary with action points for my team.”


    5. Use It for Thinking, Not Just Writing

    ChatGPT isn’t just for typing—it’s also great for thinking.

    You can use it to:

    • Brainstorm ideas
    • Break down problems
    • Compare options
    • Generate pros and cons

    Example prompts:

    • “Give me 5 ways to improve team communication in a small office”
    • “What are the pros and cons of switching to a 4-day work week?”

    6. Ask Follow-Up Questions

    One of the biggest advantages of ChatGPT is that you can keep refining.

    Instead of starting over, just say:

    • “Make it shorter”
    • “Add more detail to point 2”
    • “Explain this in simpler terms”

    Think of it as a conversation, not a one-time request.


    7. Give Clear Instructions (This Is the Secret)

    Better input = better output.

    Include:

    • Who it’s for (manager, client, team)
    • Tone (formal, friendly, persuasive)
    • Format (email, bullet points, report)
    • Key details (dates, names, goals)

    Example:
    “Write a short, friendly email to my team summarising today’s meeting decisions as bullet points.”


    8. Double-Check Important Information

    ChatGPT is powerful, but it’s far from perfect.

    Always review:

    • Numbers and calculations
    • Dates and deadlines
    • Important facts

    Think of it as a helpful assistant—not a final authority.


    9. Build Your Own “Prompt Library”

    If you find something that works well, save it.

    For example:

    • Email templates
    • Meeting summary prompts
    • Report structures

    Over time, you’ll build a personal toolkit that makes your work faster and easier.
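    If you’re comfortable with a little scripting, a prompt library can be as
    simple as a named collection of templates with blanks to fill in. A minimal
    Python sketch — the template names and fields are my own, purely
    illustrative:

```python
# A tiny personal prompt library: reusable templates with {blanks}.
PROMPTS = {
    "follow_up_email": (
        "Write a short, friendly follow-up email to {recipient} who hasn't "
        "responded in {weeks} weeks. Keep it under {max_words} words."
    ),
    "meeting_summary": (
        "Summarise these notes in {bullets} bullet points, listing key "
        "decisions, outstanding actions, and who is responsible for each."
    ),
}

def fill(name: str, **details: object) -> str:
    """Look up a saved template and fill in the details."""
    return PROMPTS[name].format(**details)

print(fill("follow_up_email", recipient="a client", weeks=2, max_words=100))
```

    A notes file or spreadsheet works just as well; the habit of saving what
    works is what matters.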


    10. Start Small and Build Confidence

    You don’t need to overhaul your workflow overnight.

    Start with one use case:

    • Writing emails
    • Summarizing notes
    • Brainstorming ideas

    Once you see the benefit, expand from there.


    Final Thoughts

    ChatGPT isn’t about replacing your skills—it’s about amplifying them.

    Used well, it can:

    • Save you hours each week
    • Improve the clarity of your communication
    • Reduce mental load on routine tasks

    The real advantage comes from consistency. The more you use it, the better you’ll understand how to get exactly what you need.


    Simple rule to remember:
    👉 Be clear, be specific, and treat it like a helpful coworker.

    That’s where the real value starts.