u/PromptInjection_

The "The future is fictional" problem of Gemini

Gemini (without using tools) has a problem:
It thinks that everything beyond its knowledge cutoff date is "fictional" or "satirical".

----

Gemini:

Key Details Break-Down

1. The Source and Meta-Data

Topic: Graphics Cards / Tech Community Project

Author: Max Doll

Date: May 14, 2026 (Note: This date is in the future, meaning it is likely a typo in your source text. It was probably published in May 2023 or 2024).*

[...] This fictional article touches upon a very popular topic in the PC gaming community:

----

Anybody else noticed this?

reddit.com
u/PromptInjection_ — 5 hours ago

The "the future is fictional" problem of many local LLMs

Many local models have a problem (one that arose due to excessive RLHF training):
They tend to think that everything beyond their knowledge cutoff date is "fictional" or "satirical".

To be fair: even the Gemini API without web access shows this sometimes, but it stops once you give it tools. With many local models, sadly, it continues even WITH tools. Look at this:

>You
search the web for iran war 2026 news

gemma-4-26B-A4B-it-Q4_K_M_128k

>Based on the search results, there is a significant amount of content describing a "2026 Iran War." However, it is important to note that these search results appear to be from a fictional or speculative scenario, likely part of a "future history" exercise, a geopolitical simulation, or a creative writing project.
As of today, there is no real-world war occurring or scheduled for 2026 involving Iran.

>Key Observations from the Search Results:
The search results contain specific dates and details that suggest they are part of a narrative or simulation:

>Specific Dates: Several snippets mention specific dates in 2026 (e.g., "February 28," "April 17, 2026," and "May 1, 2026") describing events like "U.S.-Israeli joint strikes."

>Operation Names: One result mentions "Operation Epic Fury," a name used within the context of this specific 2026 scenario.

>Nature of the Content: While the snippets appear in formats resembling real news (e.g., Wikipedia, CSIS, Atlantic Council), they describe events that have not happened in actual history. This type of content is common in geopolitical forecasting simulations or "alternate history"-style academic exercises used to study potential risks and consequences.

>Summary of the "Scenario" described in the results:

>If you are looking for information on this specific geopolitical simulation, the "news" within that narrative includes:

>The Conflict: A joint U.S.-Israeli military operation against Iran.

>Core Drivers: Fears of Iran becoming a nuclear power and the disruption of the existing regional security order.

>Impact: The scenario explores impacts on global energy security (oil markets), the shift toward a fragmented global order, and changes in Middle Eastern power dynamics.

>Conclusion: There is no actual news of an Iran war in 2026. You are likely seeing results from a geopolitical simulation or speculative "future history" analysis.

So the tool call worked, "Epic Fury" is right there, and the model still thinks it's fictional or "future". Sometimes this can be fixed by including "It is x.x.2026" in the system prompt, but that is a workaround that shouldn't be necessary.
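For anyone who wants to apply the workaround anyway: a minimal sketch of injecting the current date into the system prompt before each request. The function name and the exact wording are illustrative, not from any particular framework; the point is just that the date is generated at call time rather than hard-coded.

```python
from datetime import date

def date_anchored_system_prompt(base: str = "You are a helpful assistant.") -> str:
    """Prepend today's date so the model does not treat post-cutoff events as fiction."""
    today = date.today().strftime("%Y-%m-%d")
    return (
        f"Current date: {today}. "
        f"Treat search results dated up to this day as real events.\n{base}"
    )

print(date_anchored_system_prompt())
```

You would then pass this string as the system message of every chat request instead of the static prompt.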

reddit.com
u/PromptInjection_ — 5 hours ago

Gemma 4 E4B is great for short transcriptions

Yes, for material that is an hour long, there is no getting around tools like Whisper - or something even better.

However, for transcribing short snippets, Gemma works quickly and reliably - even in foreign languages.

Do you use it as well?

reddit.com
u/PromptInjection_ — 2 days ago

How to Fine-Tune LLMs on AMD Strix Halo and Other Exotic AMD Hardware

After the first general fine-tuning tutorial I posted (https://www.promptinjection.net/p/the-ultimate-llm-ai-fine-tuning-guide-tutorial), some people asked if I could make the same for AMD Strix Halo, because the approach there is quite different due to ROCm.

https://preview.redd.it/g63fjundxh0h1.jpg?width=1080&format=pjpg&auto=webp&s=4ea6efb97b7306646303adc9020f0a075e08865b

I listened and here it is now:
https://www.promptinjection.net/p/how-to-fine-tune-llms-on-amd-strix-halo-ryzen-ai-max-395-sft-lora

- Linux and pure Windows (no WSL!)
- Full SFT and LoRA
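For readers unfamiliar with the difference between the two bullet points: LoRA trains only a low-rank correction on top of a frozen weight matrix, instead of all weights as in full SFT. A toy stdlib-only sketch of that update (the real implementation in the guide runs on tensors, of course):

```python
# Toy illustration of the low-rank update behind LoRA:
# the adapted weight is W' = W + (alpha / r) * (B @ A), where A is r x in
# and B is out x r, so only (in + out) * r numbers are trained.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_adapted(W, A, B, alpha: float, r: int):
    """Return W + (alpha / r) * B @ A without modifying the frozen W."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]

# 2x2 frozen weight, rank-1 adapter: 4 trainable numbers instead of 4... but at
# realistic sizes (e.g. 4096x4096, r=16) the saving is dramatic.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]          # r x in  (1 x 2)
B = [[0.5], [0.5]]        # out x r (2 x 1)
print(lora_adapted(W, A, B, alpha=2.0, r=1))  # → [[2.0, 1.0], [1.0, 2.0]]
```

Because W stays frozen, the adapter can be merged into W after training or kept as a separate small file.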

reddit.com
u/PromptInjection_ — 3 days ago
▲ 2 r/ollama

I was looking for a "spot-on" fine-tuning guide for quite a while, but couldn't find one. So I thought: let's write it myself.

https://preview.redd.it/au7zb6u0exyg1.jpg?width=1672&format=pjpg&auto=webp&s=31ca78c4a5a497b2984c278a257811b183d5c0e1

It covers Full-SFT as well as LoRA and QLoRA. This one is for NVIDIA and single-GPU, but if you guys like it, I will later add multi-GPU training, AMD, and pre-training, too.

I describe the process from installing the correct drivers and libs, through preparing the dataset, up to training and the final GGUF creation.
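The dataset-preparation step usually boils down to converting raw pairs into a messages-style JSONL file. A minimal stdlib sketch of that conversion - the `messages`/`role`/`content` field names follow the common chat schema, but check what your particular trainer expects:

```python
import json

# Hypothetical raw Q/A pairs standing in for a real dataset.
pairs = [
    ("What is LoRA?", "A low-rank adapter method for cheap fine-tuning."),
    ("What is GGUF?", "A single-file model format used by llama.cpp."),
]

def to_chat_jsonl(pairs):
    """Serialize (question, answer) pairs as one chat-format JSON object per line."""
    lines = []
    for question, answer in pairs:
        record = {"messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

print(to_chat_jsonl(pairs))
```

Writing the returned string to `train.jsonl` gives you a file most SFT trainers can load directly.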

Enjoy, and let me know what you think or what I could improve further.

Full Text:
https://www.promptinjection.net/p/the-ultimate-llm-ai-fine-tuning-guide-tutorial

reddit.com
u/PromptInjection_ — 11 days ago

And as if that weren't enough: it constantly moralizes, forces a certain worldview on you in situations where it makes no sense, and so on. And when in doubt, it always looks for the fault in you. It itself, others, or Anthropic can't be wrong - it must be the user.

The moment they remove Opus 4.6, I will cancel my subscription. GPT-5.5 has a much more balanced character compared to prior releases and would be a good alternative.

reddit.com
u/PromptInjection_ — 15 days ago