Results for "perception-action"
Perception pipeline: Software pipeline that converts raw sensor data into structured representations.
Action space: The set of all actions available to the agent.
Perception-action loop: Continuous cycle of observation, reasoning, action, and feedback.
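The cycle above can be sketched as a short program. This is a minimal illustration, not any particular library's API; the environment, `policy`, and state names are all assumptions made up for the example.

```python
# Minimal sketch of a perception-action loop (all names are illustrative).
def run_loop(reset, step, policy, steps=5):
    """Observe, decide, act, and feed the outcome back as the next observation."""
    obs = reset()                       # perception
    trace = []
    for _ in range(steps):
        action = policy(obs)            # reasoning / decision
        obs, reward = step(action)      # action + feedback
        trace.append((action, reward))
    return trace

# Toy task: drive a scalar state toward zero.
state = {"x": 3}

def reset():
    return state["x"]

def step(action):
    state["x"] += action
    return state["x"], -abs(state["x"])  # new observation, reward

policy = lambda obs: -1 if obs > 0 else (1 if obs < 0 else 0)
trace = run_loop(reset, step, policy, steps=5)
```

After five steps the policy has driven the state to zero and the reward (negative distance from zero) has reached its maximum.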
Active inference: Acting to minimize surprise or free energy.
Robotics: Field combining mechanics, control, perception, and AI to build autonomous machines.
Value function: Expected cumulative reward from a state or state-action pair.
Q-function: Expected return of taking a given action in a given state.
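A tabular Q-function and its standard one-step Q-learning update make the definition concrete. The states, actions, and numbers below are illustrative.

```python
# Tabular Q-values and the one-step Q-learning update (illustrative sketch).
from collections import defaultdict

Q = defaultdict(float)       # Q[(state, action)] -> estimated expected return
alpha, gamma = 0.5, 0.9      # learning rate, discount factor

def q_update(s, a, r, s_next, actions):
    """Move Q(s, a) toward the target r + gamma * max_a' Q(s', a')."""
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

# One experienced transition: in "s0", taking "right" earned reward 1.0.
q_update("s0", "right", 1.0, "s1", actions=["left", "right"])
```

With all estimates initialized to zero, a single update moves Q("s0", "right") halfway toward the target of 1.0, i.e. to 0.5.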
Artificial intelligence (AI): The field of building systems that perform tasks associated with human intelligence—perception, reasoning, language, planning, and decision-making—via algorithms.
World model: Predicts the next state given the current state and action.
Markov decision process (MDP): Formal framework for sequential decision-making under uncertainty.
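An MDP is fully specified by states, actions, transition probabilities, rewards, and a discount factor. A toy two-state example as plain data (all values invented for illustration):

```python
# A finite MDP as plain data: states, actions, transitions, rewards, discount.
# (Toy two-state example; all numbers are illustrative.)
mdp = {
    "states": ["s0", "s1"],
    "actions": ["stay", "go"],
    # P[(s, a)] -> list of (next_state, probability)
    "P": {
        ("s0", "stay"): [("s0", 1.0)],
        ("s0", "go"):   [("s1", 0.9), ("s0", 0.1)],
        ("s1", "stay"): [("s1", 1.0)],
        ("s1", "go"):   [("s0", 1.0)],
    },
    # R[(s, a)] -> expected immediate reward
    "R": {("s0", "stay"): 0.0, ("s0", "go"): 1.0,
          ("s1", "stay"): 0.5, ("s1", "go"): 0.0},
    "gamma": 0.95,
}

# Sanity check: transition probabilities sum to 1 for each (state, action).
ok = all(abs(sum(p for _, p in dist) - 1.0) < 1e-9 for dist in mdp["P"].values())
```

The "uncertainty" in the definition lives in `P`: taking "go" in "s0" only reaches "s1" 90% of the time here.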
Autonomous agent: System that independently pursues goals over time.
Embodied AI: AI systems that perceive and act in the physical world through sensors and actuators.
Policy: Strategy mapping states to actions.
Bellman equation: Fundamental recursive relationship defining optimal value functions.
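The recursion can be turned directly into an algorithm: value iteration repeatedly applies the Bellman optimality backup V(s) ← max_a [R(s, a) + γ Σ_s' P(s'|s, a) V(s')] until the values stop changing. A sketch on a toy two-state problem (all numbers invented):

```python
# Value iteration: iterate the Bellman optimality backup on a toy MDP.
R = {("s0", "go"): 1.0, ("s0", "stay"): 0.0,
     ("s1", "go"): 0.0, ("s1", "stay"): 0.5}
P = {("s0", "go"): {"s1": 1.0}, ("s0", "stay"): {"s0": 1.0},
     ("s1", "go"): {"s0": 1.0}, ("s1", "stay"): {"s1": 1.0}}
gamma = 0.9
V = {"s0": 0.0, "s1": 0.0}

for _ in range(200):
    # V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) * V(s') ]
    V = {s: max(R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items())
                for a in ("go", "stay"))
         for s in V}
```

For this toy problem the fixed point is V(s1) = 0.5 / (1 − 0.9) = 5.0 (stay forever) and V(s0) = 1 + 0.9 · 5.0 = 5.5 (go once, then stay).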
Policy gradient methods: Optimizing policies directly via gradient ascent on expected reward.
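The simplest instance is REINFORCE: sample an action, then nudge the policy parameters along reward × ∇ log π(a). A sketch on a two-armed bandit with a softmax policy (arm payoffs and hyperparameters are illustrative):

```python
# REINFORCE sketch on a two-armed bandit with a softmax policy.
import math
import random

random.seed(0)

theta = [0.0, 0.0]            # one logit per arm
true_reward = [0.2, 1.0]      # arm 1 pays more (illustrative values)
lr = 0.1

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

for _ in range(2000):
    probs = softmax(theta)
    a = random.choices([0, 1], weights=probs)[0]   # sample an action
    r = true_reward[a] + random.gauss(0, 0.1)      # noisy reward
    # grad of log pi(a) w.r.t. theta_k is (1[k == a] - probs[k])
    for k in range(2):
        theta[k] += lr * r * ((1 if k == a else 0) - probs[k])

p_better_arm = softmax(theta)[1]
```

After training, the policy puts most of its probability on the higher-paying arm.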
Exploration-exploitation trade-off: Balancing the learning of new behaviors against the exploitation of known rewards.
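The canonical mechanism for this trade-off is epsilon-greedy selection: with small probability explore a random action, otherwise exploit the current best estimate. A sketch (values illustrative):

```python
# Epsilon-greedy action selection: explore with probability epsilon,
# otherwise exploit the current best value estimate.
import random

random.seed(1)

def epsilon_greedy(q_values, epsilon=0.1):
    if random.random() < epsilon:
        return random.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit

picks = [epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.1) for _ in range(1000)]
exploit_rate = picks.count(1) / len(picks)
```

With epsilon = 0.1, roughly 90% of picks are the greedy arm (plus a share of the random exploration that happens to land on it).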
ReAct: Interleaving reasoning and tool use.
Reflex agent: Simple agent responding directly to inputs.
Closed-loop control: Continuous loop adjusting actions based on state feedback.
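A proportional controller is the smallest closed-loop example: each step, the action is proportional to the error between the setpoint and the measured state. The gain and plant dynamics below are invented for illustration:

```python
# Proportional feedback control: act on the error between setpoint and state.
setpoint = 10.0
x = 0.0          # measured state
kp = 0.5         # proportional gain (illustrative)

for _ in range(50):
    error = setpoint - x     # feedback: compare measurement to target
    u = kp * error           # control action
    x += u                   # simple plant: state moves by the action

converged = abs(setpoint - x) < 1e-3
```

Because the error is halved on every step here, the state converges geometrically to the setpoint; open-loop control (ignoring the measurement) has no such self-correction.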
Imitation learning: Learning policies from expert demonstrations.
Behavioral cloning: Learning the state-to-action mapping directly from demonstrations.
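In its simplest tabular form, behavioral cloning is just supervised learning on (state, action) pairs; here the "model" is a per-state majority vote over the expert's choices (the thermostat-style demonstrations are made up for illustration):

```python
# Behavioral cloning sketch: fit a state -> action mapping directly from
# expert (state, action) demonstrations, here by per-state majority vote.
from collections import Counter, defaultdict

demos = [("low", "heat"), ("low", "heat"), ("high", "cool"),
         ("high", "cool"), ("low", "heat"), ("high", "off")]

counts = defaultdict(Counter)
for s, a in demos:
    counts[s][a] += 1

policy = {s: c.most_common(1)[0][0] for s, c in counts.items()}
```

In practice the majority vote is replaced by a trained classifier or regressor, but the objective is the same: reproduce the expert's action in each state.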
On-policy learning: Learning only from the current policy’s data.
Prosody: Temporal and pitch characteristics of speech.
Vocoder: Generates audio waveforms from spectrograms.
Sensors: Devices measuring physical quantities (vision, lidar, force, IMU, etc.).
Cognitive architecture: System-level design for general intelligence.
Reinforcement learning: A learning paradigm in which an agent interacts with an environment and learns to choose actions that maximize cumulative reward.
System prompt: A high-priority instruction layer setting overarching behavior constraints for a chat model.
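In chat-style APIs this typically appears as the first message in the conversation. The role/content layout below follows the common convention; the exact schema and field names depend on the provider:

```python
# Typical chat-API message layout: the system prompt comes first and
# constrains the whole conversation (schema details vary by provider).
messages = [
    {"role": "system",
     "content": "You are a concise assistant. Always answer in plain English."},
    {"role": "user",
     "content": "Summarize this paragraph in one sentence."},
]
```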
Planning: Methods for breaking goals into steps; can be classical (A*, STRIPS) or LLM-driven with tool calls.
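On the classical side, A* search illustrates the idea: expand states in order of cost-so-far plus a heuristic estimate of the remaining cost. A compact sketch on a small grid (the grid and unit step costs are invented):

```python
# Classical planning sketch: A* on a small grid, Manhattan-distance heuristic.
import heapq

grid = ["....",
        ".##.",
        "...."]          # '#' marks an obstacle
start, goal = (0, 0), (2, 3)

def neighbors(p):
    r, c = p
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != "#":
            yield (nr, nc)

def h(p):
    # Admissible heuristic: Manhattan distance to the goal.
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, node, path)
best_g = {}
path = None
while frontier:
    f, g, node, p = heapq.heappop(frontier)
    if node == goal:
        path = p
        break
    if best_g.get(node, float("inf")) <= g:
        continue
    best_g[node] = g
    for n in neighbors(node):
        heapq.heappush(frontier, (g + 1 + h(n), g + 1, n, p + [n]))
```

The returned plan is a shortest obstacle-free path (5 moves, 6 cells) from start to goal.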
Actor-critic: Combines value estimation (the critic) with policy learning (the actor).
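The two pieces fit together like this: the critic learns a value estimate, and the actor updates its policy using the critic's TD error as the learning signal. A one-state, two-action sketch (all numbers invented for illustration):

```python
# Actor-critic sketch: the critic's TD error drives the actor's update.
import math
import random

random.seed(0)

logits = [0.0, 0.0]     # actor: softmax-policy parameters
v = 0.0                 # critic: value estimate of the single state
alpha_actor, alpha_critic = 0.1, 0.1
rewards = [0.0, 1.0]    # action 1 is better (illustrative)

def softmax(ls):
    m = max(ls)
    e = [math.exp(x - m) for x in ls]
    z = sum(e)
    return [x / z for x in e]

for _ in range(3000):
    probs = softmax(logits)
    a = random.choices([0, 1], weights=probs)[0]
    r = rewards[a]
    td_error = r - v                      # critic's surprise (no next state here)
    v += alpha_critic * td_error          # critic update
    for k in range(2):                    # actor update: td_error * grad log pi
        logits[k] += alpha_actor * td_error * ((1 if k == a else 0) - probs[k])

p_best = softmax(logits)[1]
```

Using the TD error instead of the raw reward (as in REINFORCE) gives the actor a lower-variance learning signal, which is the main practical motivation for the split.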