An educational, accessible Hangman game suite powered by speech recognition. Players can guess letters or full words using their voice (or an on‑screen keyboard), with adaptive hints, difficulty levels, and post‑game feedback.
## Features

- Voice input (letters & words): Combines the Microsoft Azure Cognitive Services Speech SDK with the Web Speech API for robust recognition.
- On‑screen keyboard: Optional manual input with clear visual feedback.
- Multiple game modes:
  - Hangman 2: Letter‑by‑letter guessing with instant feedback.
  - Hangman 3: Full‑word recognition with precision scoring.
- Adaptive hints: Images and text hints varying by difficulty.
- Progress & feedback: Post‑game feedback screen for quick insights.
- Kid‑friendly settings: Age groups, sound options, and word position settings.
## Tech Stack

- React 18 + TypeScript
- Vite 6 (dev/build/preview)
- Azure Cognitive Services Speech SDK (`microsoft-cognitiveservices-speech-sdk`)
- Web Speech API fallback (see the detection sketch after this list)
- React Router 7
- Tailwind CSS
- Testing: Vitest + @testing-library/react
- Linting: ESLint
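The Web Speech API is only available in some browsers (Chrome and Edge expose it behind a `webkit` prefix), so the fallback has to be feature‑detected. The following is a minimal sketch of that detection using only standard browser APIs; it is not the project's actual hook code.

```ts
// Sketch: feature-detect the browser's Web Speech API so it can be used
// alongside the Azure SDK. Chrome/Edge expose the prefixed constructor.
const WebSpeechCtor =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

if (WebSpeechCtor) {
  const recognizer = new WebSpeechCtor();
  recognizer.lang = "en-US";
  recognizer.maxAlternatives = 5; // multiple hypotheses help with single letters
  recognizer.onresult = (event: any) => {
    console.log("Web Speech heard:", event.results[0][0].transcript);
  };
  recognizer.start(); // listens for a single utterance by default
}
```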
## Getting Started

- Prerequisites
  - Node.js 18+ and npm
  - An HTTPS‑capable browser (Chrome/Edge recommended) with microphone access
  - An Azure Speech resource (key + region) and a token service endpoint (see Environment)
- Install

  ```bash
  npm install
  ```

- Configure environment

  Create a `.env` file in the project root:

  ```
  # Base URL for your token service (must expose /api/get-speech-token)
  VITE_BASE_URL=https://your-backend.example.com

  # Optional: Custom Speech model endpoint ID
  VITE_APP_SPEECH_ENDPOINT=your-custom-endpoint-id
  ```

- Run

  ```bash
  npm run start
  # Vite dev server starts; open the printed local URL
  ```

- Build & preview

  ```bash
  npm run build
  npm run preview
  ```

- Tests & lint

  ```bash
  npm test
  npm run lint
  ```

## Environment

The app does not store Azure keys in the client. Instead, it requests a short‑lived token from a backend endpoint and caches it in a cookie.
- Variable `VITE_BASE_URL` must point to a backend that exposes `GET /api/get-speech-token` and returns `{ "token": "<azure-speech-token>", "region": "<azure-region>" }` (a minimal example service is sketched below).
- Optional `VITE_APP_SPEECH_ENDPOINT` sets a Custom Speech endpoint ID for improved accuracy.
- Token retrieval logic lives in `src/token_util.ts` and is used by the hooks and the main‑menu prefetch.
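For reference, such a token service can be a single route that exchanges a server‑side subscription key for a short‑lived token via Azure's STS endpoint. The sketch below is not part of this repo; Express, the `SPEECH_KEY`/`SPEECH_REGION` variables, and the port are assumptions.

```ts
// token-server.ts: illustrative backend sketch only (assumes Node 18+ and Express).
import express from "express";

const app = express();

app.get("/api/get-speech-token", async (_req, res) => {
  const key = process.env.SPEECH_KEY;       // hypothetical server-side env vars
  const region = process.env.SPEECH_REGION;
  if (!key || !region) {
    res.status(500).json({ error: "Speech credentials not configured" });
    return;
  }
  try {
    // Azure STS exchanges the subscription key for a token valid for ~10 minutes.
    const sts = await fetch(
      `https://${region}.api.cognitive.microsoft.com/sts/v1.0/issueToken`,
      { method: "POST", headers: { "Ocp-Apim-Subscription-Key": key } }
    );
    if (!sts.ok) throw new Error(`STS responded with ${sts.status}`);
    const token = await sts.text();
    res.json({ token, region }); // the shape the client expects
  } catch {
    res.status(401).json({ error: "Failed to issue speech token" });
  }
});

app.listen(3001);
```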
## Routes

Routes are defined in `src/routes.tsx` and wrapped with `GameProvider` (a simplified sketch follows the list):

- `/` → Main menu
- `/settings/:gameId` → Game settings per mode
- `/word-selection/:gameId` → Word list selection (if applicable)
- `/game/hangman`, `/game/hangman2`, `/game/hangman3` → Game screens
- `/feedback` → Post‑game feedback page

The primary entry mounts `AppRoutes` in `src/main.tsx`.
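A simplified sketch of how that wiring might look; only `GameProvider` and the route paths come from this README, and the page component names are placeholders:

```tsx
// src/routes.tsx: illustrative sketch; the real file may organize routes differently.
import { BrowserRouter, Routes, Route } from "react-router";
import { GameProvider } from "./context/GameContext";
// Placeholder page components, assumed for illustration only.
import MainMenu from "./pages/MainMenu";
import GameSettings from "./pages/GameSettings";
import Feedback from "./pages/Feedback";

export default function AppRoutes() {
  return (
    <BrowserRouter>
      {/* GameProvider exposes shared game state to every route. */}
      <GameProvider>
        <Routes>
          <Route path="/" element={<MainMenu />} />
          <Route path="/settings/:gameId" element={<GameSettings />} />
          <Route path="/feedback" element={<Feedback />} />
          {/* word-selection and game routes omitted for brevity */}
        </Routes>
      </GameProvider>
    </BrowserRouter>
  );
}
```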
## Architecture

- `context/GameContext.tsx` — Central game state: current word, guesses, modal state, microphone enablement, navigation helpers, and a `GameStatsManager` instance.
- `hooks/useVoiceRecognition.ts` — Per‑letter recognition. Merges results from the Azure Speech SDK and the Web Speech API and normalizes many letter pronunciations (e.g., "why" → "y"); see the sketch after this list.
- `hooks/useFullWordRecognition.ts` — Single‑utterance, full‑word recognition via the Azure Speech SDK.
- `hooks/useMicrophone.ts` — Microphone availability and permission prompts used on the main menu and in games.
- `components/GameRelated/*` — Game boards, keyboard UI, modals, illustrations, and hint display.
- `pages/*` — Menu, settings, word selection, and feedback screens.
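To illustrate the normalization step in isolation: the alias table below is a hypothetical excerpt, not the hook's actual mapping.

```ts
// Hypothetical sketch of mapping a recognized utterance to a single letter;
// useVoiceRecognition.ts keeps a larger table and merges results from two recognizers.
const LETTER_ALIASES: Record<string, string> = {
  why: "y", you: "u", are: "r", sea: "c", see: "c", bee: "b", jay: "j", queue: "q",
};

export function normalizeToLetter(utterance: string): string | null {
  const cleaned = utterance.trim().toLowerCase().replace(/[^a-z]/g, "");
  if (cleaned.length === 1) return cleaned;   // already a bare letter
  return LETTER_ALIASES[cleaned] ?? null;     // spoken alias, or no match
}
```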
## Game Modes

| Mode | Input | Hints | Best For |
|---|---|---|---|
| Hangman 2 | Letter‑by‑letter (voice/keyboard) | Text + visuals | Younger learners / phonics |
| Hangman 3 | Full‑word (voice) | Timed + visuals | Older learners / advanced |
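For orientation, single‑utterance recognition with the Azure Speech SDK (the pattern behind Hangman 3's full‑word mode) looks roughly like the sketch below. The token and region come from the backend described under Environment; the function name and callback shape are illustrative.

```ts
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

// Sketch: recognize one spoken word/phrase and hand the text to a callback.
export function recognizeOnce(token: string, region: string, onText: (text: string) => void) {
  const speechConfig = sdk.SpeechConfig.fromAuthorizationToken(token, region);
  speechConfig.speechRecognitionLanguage = "en-US";
  const audioConfig = sdk.AudioConfig.fromDefaultMicrophoneInput();
  const recognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);

  recognizer.recognizeOnceAsync((result) => {
    if (result.reason === sdk.ResultReason.RecognizedSpeech) {
      onText(result.text); // e.g. compare against the target word for scoring
    }
    recognizer.close();
  });
}
```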
## Notes

- Mic permission is required for voice features; a minimal availability check is sketched after these notes. The main menu shows microphone and Azure token service status indicators.
- Voice recognition works best in secure (HTTPS) contexts.
- Chrome or Edge on desktop recommended; Safari may require additional permission flows.
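A minimal availability check along these lines uses the standard `getUserMedia` API; this is a sketch, not the project's `useMicrophone` implementation.

```ts
// Sketch: request mic access once, then release the device immediately.
export async function hasMicrophoneAccess(): Promise<boolean> {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    stream.getTracks().forEach((track) => track.stop()); // free the microphone
    return true;
  } catch {
    return false; // permission denied, no device, or insecure (non-HTTPS) context
  }
}
```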
## Scripts

- `npm run start` — Start the Vite dev server
- `npm run build` — Production build to `dist/`
- `npm run preview` — Preview the production build
- `npm test` — Run unit tests with a UI
- `npm run lint` — Run ESLint across the project
## Project Structure

- `src/main.tsx` — App bootstrap
- `src/routes.tsx` — Routing and `GameProvider`
- `src/context/GameContext.tsx` — Shared game state
- `src/hooks/useVoiceRecognition.ts` — Letter recognition (Azure + Web Speech)
- `src/hooks/useFullWordRecognition.ts` — Full‑word recognition (Azure)
- `src/token_util.ts` — Token retrieval from the backend
## Testing

Vitest is configured with JSDOM and a global setup file:

- Config: `vite.config.js` → `test` section
- Setup: `vitest.setup.js`
- Example test: `src/components/GameRelated/HangmanBoardV3.test.jsx` (an illustrative sketch follows)
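An illustrative test in this setup; the component and assertions below are placeholders, not copied from `HangmanBoardV3.test.jsx`.

```tsx
// Illustrative only: shows the Vitest + Testing Library pattern used in the repo.
import { describe, expect, it } from "vitest";
import { render, screen } from "@testing-library/react";

// Hypothetical stand-in component so the sketch stays self-contained.
function WordPrompt({ word }: { word: string }) {
  return <p>Guess the word: {word.length} letters</p>;
}

describe("WordPrompt", () => {
  it("shows how many letters to guess", () => {
    render(<WordPrompt word="apple" />);
    expect(screen.getByText(/5 letters/i)).toBeTruthy();
  });
});
```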
Run tests:

```bash
npm test
```

## Troubleshooting

- No mic detected: Ensure a physical microphone is available and selected by the OS.
- Permission denied: Click the mic status button in the main menu or allow mic permissions in the browser.
- Azure token offline: Verify `VITE_BASE_URL` and that your backend returns `{ token, region }`.
- Low accuracy for letters: Provide `VITE_APP_SPEECH_ENDPOINT` if you have a Custom Speech model; otherwise, the default recognizer improves accuracy through phrase hints.
## License

MIT License — see `LICENSE.md`.
This project depends on third-party libraries that are licensed separately from the project’s MIT license. See THIRD_PARTY_NOTICES.md.