Results for "model-based"
Model-Based RL
Advanced
RL using learned or known environment models.
Model-based reinforcement learning is like having a map while exploring a new city. Instead of wandering around aimlessly, you can look at the map to plan your route and make better decisions about where to go next. In this type of learning, an AI agent first learns how the environment works—like...
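Since the full entry is truncated above, a minimal runnable sketch may help: given a known (or learned) model of a tiny corridor world, an agent can plan with value iteration instead of learning by trial and error. The MDP, names, and numbers here are all illustrative, not from the original entry.

```python
# Illustrative sketch: planning with a known model via value iteration.
# The "model" is step_model; the MDP is a toy 5-state corridor.

N_STATES = 5          # states 0..4; state 4 is the goal (terminal)
GAMMA = 0.9           # discount factor
ACTIONS = (-1, +1)    # move left / move right

def step_model(state, action):
    """Known environment model: deterministic next state and reward."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def plan(sweeps=100):
    """Use the model to compute values and a policy without real interaction."""
    V = [0.0] * N_STATES

    def q(s, a):
        s2, r = step_model(s, a)
        return r + GAMMA * V[s2]

    for _ in range(sweeps):
        for s in range(N_STATES - 1):  # goal state keeps value 0
            V[s] = max(q(s, a) for a in ACTIONS)
    policy = [max(ACTIONS, key=lambda a: q(s, a)) for s in range(N_STATES - 1)]
    return V, policy
```

Because the transition model is available, the best route to the goal falls out of pure computation, which is the "look at the map" idea in the description.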
Q-Value
Expected return of taking an action in a given state.
Extending agents with long-term memory stores.
Policy Gradient
Optimizing policies directly via gradient ascent on expected reward.
Emergent Coordination
Coordination arising without explicit programming.
Off-Policy Learning
Learning from data generated by a different policy than the one being optimized.
Categorizing AI applications by impact and regulatory risk.
Graph Neural Network (GNN)
Neural networks that operate on graph-structured data by propagating information along edges.
Message Passing
GNN framework where nodes iteratively exchange and aggregate messages from neighbors.
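One round of the exchange-and-aggregate idea can be sketched in a few lines. The graph, features, and mean aggregator below are illustrative choices; real GNN layers use learned weight matrices and nonlinearities.

```python
# Illustrative sketch: one message-passing round on a tiny graph.
# Real GNN layers use learned weights; here "aggregate" is a plain mean.

graph = {0: [1, 2], 1: [0], 2: [0]}      # adjacency list
features = {0: 1.0, 1: 2.0, 2: 4.0}      # one scalar feature per node

def message_passing_round(graph, feats, self_weight=0.5):
    """Each node averages its neighbors' messages, then mixes with its own state."""
    new_feats = {}
    for node, neighbors in graph.items():
        aggregated = sum(feats[n] for n in neighbors) / len(neighbors)
        new_feats[node] = self_weight * feats[node] + (1 - self_weight) * aggregated
    return new_feats
```

Stacking several such rounds lets information travel multiple hops along edges.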
Optical Flow
Pixel motion estimation between frames.
Seasonality
Repeating temporal patterns in time-series data.
Blue-Green Deployment
Maintaining two production environments for instant rollback.
Autonomous Agent
A system that independently pursues goals over time.
ReAct
Interleaving reasoning steps with tool use.
Central Limit Theorem
Sum of many independent random variables converges to a normal distribution.
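The theorem is easy to check by simulation. Sample sizes and the choice of Uniform(0, 1) below are arbitrary, illustration only.

```python
# Illustrative simulation: sums of i.i.d. Uniform(0, 1) variables
# concentrate around the mean/stdev predicted by the CLT.
import random
import statistics

def sample_sums(n_vars=30, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(n_vars)) for _ in range(n_samples)]

sums = sample_sums()
# Each Uniform(0, 1) has mean 1/2 and variance 1/12, so the sum of 30
# should have mean 15 and standard deviation sqrt(30/12) ~ 1.58.
sample_mean = statistics.mean(sums)
sample_stdev = statistics.stdev(sums)
```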
Posterior
Updated belief after observing data.
Prior
Belief before observing data.
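The prior-to-posterior update can be sketched with a two-hypothesis coin example; the hypotheses and probabilities below are made up for illustration.

```python
# Illustrative Bayesian update: is a coin fair or heads-biased?
# All numbers here are invented for the example.

priors = {"fair": 0.5, "biased": 0.5}    # belief before seeing data
p_heads = {"fair": 0.5, "biased": 0.8}   # P(heads | hypothesis)

def posterior(priors, heads, flips):
    """Belief after observing `heads` heads in `flips` tosses (Bayes' rule)."""
    tails = flips - heads
    unnormalized = {h: prior * p_heads[h] ** heads * (1 - p_heads[h]) ** tails
                    for h, prior in priors.items()}
    evidence = sum(unnormalized.values())
    return {h: w / evidence for h, w in unnormalized.items()}
```

Observing 8 heads in 10 flips shifts most of the belief onto the biased hypothesis.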
Stochastic Optimization
Optimization under uncertainty.
EU AI Act
European regulation classifying AI systems by risk.
High-Risk AI
AI used in sensitive domains requiring regulatory compliance.
Risk Register
Central log of AI-related risks.
Assigning AI costs to business units.
Caching
Storing computed results to reduce repeated compute.
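A minimal memoization sketch using Python's standard library; the function and call counter are illustrative.

```python
# Illustrative memoization: cache results so repeated calls skip the work.
from functools import lru_cache

call_count = {"n": 0}   # tracks how often the underlying computation runs

@lru_cache(maxsize=None)
def slow_square(x):
    call_count["n"] += 1
    return x * x

results = [slow_square(3) for _ in range(1000)]  # 1000 calls, 1 computation
```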
Embodied AI
AI systems that perceive and act in the physical world through sensors and actuators.
Sensors
Devices measuring physical quantities (vision, lidar, force, IMU, etc.).
Proprioception
Internal sensing of joint positions, velocities, and forces.
Exteroception
External sensing of the surroundings (vision, audio, lidar).
Closed-Loop Control
Control using real-time sensor feedback.
Feedback Loop
Using a system's output to adjust its future inputs.
PID Controller
Classical controller balancing responsiveness and stability.
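A discrete PID loop on a toy first-order plant shows the three terms working together; the gains and plant model are illustrative, not tuned for any real device.

```python
# Illustrative discrete PID loop on a toy first-order plant (dy/dt = -y + u).
# Gains are made up; real controllers need tuning and anti-windup.

def simulate_pid(setpoint=1.0, kp=2.0, ki=1.0, kd=0.1, dt=0.01, steps=2000):
    y = 0.0                    # plant output
    integral = 0.0
    prev_error = setpoint - y
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt                  # I term removes steady-state error
        derivative = (error - prev_error) / dt  # D term damps fast changes
        u = kp * error + ki * integral + kd * derivative
        prev_error = error
        y += (-y + u) * dt     # Euler step of the plant dynamics
    return y
```

The proportional term gives responsiveness, the derivative term stability, and the integral term drives the remaining offset to zero.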
Forward Kinematics
Computing end-effector position from joint angles.
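For a two-link planar arm, the computation is a few lines of trigonometry; the link lengths below are illustrative.

```python
# Illustrative forward kinematics for a 2-link planar arm.
# theta2 is the second joint angle measured relative to the first link.
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

With both joints at zero the arm lies along the x-axis, so the end effector sits at (l1 + l2, 0).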