Shinros Blog

Field notes from inside real Support work. Prompt experiments, failure cases, workflow design, and how teams actually build trust in AI.

No client data. No hype. Just what holds up under pressure.

Transparency & Testing Ethics

All blog experiments and prompting tests are conducted independently. No client data is ever used in our posts or demonstrations.

When real-world experience shapes our insights, we recreate the underlying scenarios using synthetic or publicly available data, never proprietary or sensitive material.

We believe transparency builds trust, and clear boundaries protect it.

Latest Posts

Three Days with AI: From Smart Homes to Red-Teaming Prototypes

Published August 11, 2025 · Tags: AIWorkflow, Automation, RedTeaming

Three days, one AI assistant, and a dozen unexpected outcomes. From automating my home to unfamiliar food prep to building a red-teaming prototype for testing AI safety, these rapid experiments show how versatile AI can be when you work alongside it with intention.

Read more →

Evaluating GPT-4o’s Reasoning Through Wordle and ReAct

Published July 22, 2025 · Tags: PromptEngineering, GPT-4o, Experiment

What happens when a language model takes on a logic puzzle like Wordle? This experiment runs GPT-4o through a ReAct scaffold and shows where its reasoning holds, and where its pattern-following snaps. The breaks matter. They explain why oversight and structure still decide whether AI helps or harms.

Read more →

Prompting for Moral Reasoning: A Trolley Test Across Contexts

Published July 02, 2025 · Tags: PromptEngineering, Ethics, GPT-4o, Experiment

How does long-term context change an AI’s moral judgment? We tested GPT-4o with a trolley problem under different identities and framing. You can watch it bend. Stakes shift when the model is allowed to build a memory of you.

Read more →

Reasoning, Actions, and Local Models: A ReAct Field Test

Published April 28, 2025 · Tags: PromptEngineering, LLMs, ReAct

ReAct looks great in papers. We wanted to know whether it survives reality, especially on local models. This walkthrough covers setup, behavior under pressure, and what "reasoning" really means when the model runs on your own hardware.

Read more →

Before the Beacon: Why Shinros Exists

Published April 21, 2025 · Tags: PromptEngineering, LLMs, Reflection

This is the why. The gap between “we rolled out AI” and “our team can actually use it.” What it felt like being in Support without a playbook, and why Shinros is built around trust, not novelty.

Read more →

Train your Support team to trust AI under pressure

Practical prompt training, real workflow fixes, zero fluff.

Book a Call