Building Trainable Terminal Games: Introducing tty_games Engine for Python
Reinventing Terminal Gaming with Machine Learning
The tty_games engine brings a fresh approach to building 2D terminal-based games with built-in reinforcement learning capabilities. This Python framework provides abstract interfaces that let developers focus on game logic while the engine handles rendering, event processing, and AI training workflows.
Core Architecture: Game and TrainableGame Interfaces
The engine offers two primary interfaces:
- Basic Game Interface:
class Game:
    def update_game(self):        # game state updates per time step
        ...
    def update_grid(self):        # grid rendering logic
        ...
    def on_key_press(self, key):  # keyboard input handling
        ...
- TrainableGame Extension:
class TrainableGame(Game):
    def play_step(self, action):  # returns (reward, game_over, score)
        ...
    def reset(self):              # reinitializes state for a new episode
        ...
    def get_state(self):          # returns the environment state representation
        ...
Developers implement these methods to define game mechanics, while the engine handles game-loop timing through a configurable updates-per-second (UPS) rate.
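To give a feel for how little code a game needs, here is a minimal sketch of the basic interface in use. It assumes, as in the Snake example below, that the base class exposes a grid object writable via grid.grid[x, y] and that key presses arrive as strings; the class and key names are illustrative, not taken from the library.

class Blinker(Game):
    """Hypothetical one-cell game: a dot that blinks each time step
    and moves up or down in response to key presses."""

    def __init__(self):
        self.x, self.y = 5, 5
        self.visible = True

    def update_game(self):
        # Called once per time step (UPS times per second).
        self.visible = not self.visible

    def update_grid(self):
        # Write the current state into the render grid.
        self.grid.grid[self.x, self.y] = '⬤' if self.visible else ' '

    def on_key_press(self, key):
        # Key names are assumed here for illustration.
        if key == 'up':
            self.y -= 1
        elif key == 'down':
            self.y += 1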
Snake Game Implementation Showcase
The repository includes a complete Snake implementation demonstrating key concepts:
Game Logic:
def update_game(self):
    self.update_food()
    self.update_body()
    self.check_if_dead()

def update_grid(self):
    # Render body segments (the head is drawn separately).
    for x, y in self.body[:-1]:
        self.grid.grid[x, y] = '⬤'
    # Render food and special items; the coordinates come from the game state.
    self.grid.grid[food_x, food_y] = '@'
    self.grid.grid[special_x, special_y] = '🐭'
Reinforcement Learning Integration:
def play_step(self, action):
    # Translate the agent's action into a key press, then advance the game.
    self.on_key_press(self.map_action(action))
    score_before = self.score
    self.update_game()
    # Reward shaping: penalize death, reward eating, otherwise neutral.
    if self.game_over:
        reward = -10
    elif self.score > score_before:
        reward = 10
    else:
        reward = 0
    return reward, self.game_over, self.score
def get_state(self):
    # 12-dimensional state vector: current direction, food location
    # relative to the head, and adjacent-cell danger flags.
    return np.array([
        dir_up, dir_down, dir_left, dir_right,
        food_up, food_down, food_left, food_right,
        danger_up, danger_down, danger_left, danger_right,
    ], dtype=int)
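The TrainableGame contract is enough to drive a standard agent-environment loop. The sketch below runs a random policy against the interface to show how reset, get_state, and play_step compose; the three-action count and the episode structure are assumptions for illustration, not part of the library.

import random

def run_episodes(game, n_episodes=10, n_actions=3):
    """Drive any TrainableGame with a random policy
    (a stand-in for a learned agent)."""
    for _ in range(n_episodes):
        game.reset()                # start a fresh episode
        state = game.get_state()    # initial observation
        done = False
        while not done:
            action = random.randrange(n_actions)          # placeholder for agent.act(state)
            reward, done, score = game.play_step(action)  # advance one step
            state = game.get_state()                      # observe the result
        print(f"episode finished with score {score}")

A learned agent would slot in where the random choice is, consuming the (state, action, reward, next_state) transitions the loop already produces.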
Training and Deployment Workflow
The framework provides streamlined methods for agent training and visualization:
pysnake = Snake((25, 25), walls=False, ups=25)
pysnake.train() # Start training process
pysnake.watch_agent_play() # Observe trained agent
Technical Advantages
- Unicode Support: Uses characters like ⬤, @, and 🐭 for rich terminal visuals
- Custom State Vectors: Flexible state representation for RL agents
- Reward Engineering: Configurable reward functions for training
- Randomized Initialization: Dynamic game reset for robust training
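Reward engineering in particular maps naturally onto subclassing: since play_step computes the reward, a variant game can override it to reshape training. A hypothetical sketch, with an illustrative step-penalty value:

class CautiousSnake(Snake):
    """Hypothetical variant that adds a small per-step penalty
    to discourage aimless wandering during training."""

    def play_step(self, action):
        reward, game_over, score = super().play_step(action)
        if reward == 0:
            reward = -0.1  # mild pressure to seek food sooner
        return reward, game_over, score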
"The abstraction of game loop management and RL integration allows developers to prototype trainable games in hours rather than days" - Project Philosophy
Potential Applications
- Reinforcement learning education
- Algorithm testing environments
- Retro game recreations
- AI competition platforms
Source: tty_games GitHub Repository