Results for "environment representation"
Internal representation of the agent itself.
Modeling environment evolution in latent space.
Internal representation of environment layout.
Learned model of environment dynamics.
Automatically learning useful internal features (latent variables) that capture salient structure for downstream tasks.
Agent reasoning about future outcomes.
A learning paradigm where an agent interacts with an environment and learns to choose actions to maximize cumulative reward.
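A minimal tabular Q-learning sketch illustrates this paradigm. The 5-state chain environment, reward scheme, and all hyperparameters below are made up for illustration:

```python
import random

# Hypothetical 5-state chain MDP: action 0 moves left, action 1 moves right;
# reaching the rightmost state ends the episode with reward 1.
N_STATES, ACTIONS = 5, (0, 1)

def step(state, action):
    """Environment transition: returns (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: explore sometimes, otherwise act greedily
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # Q-learning update: nudge toward reward + discounted best next value
            best_next = 0.0 if done else max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned greedy policy should move right in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
```

The agent never sees the transition function directly; it learns purely from the reward signal observed while interacting.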
Maintaining two parallel environments so a deployment can be rolled back instantly.
Artificial environment for training/testing agents.
RL using learned or known environment models.
Actions an agent perceives the environment to allow.
Structured graph encoding facts as entity–relation–entity triples.
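The triple structure can be sketched as a minimal in-memory store with wildcard queries; the entities and relations below are illustrative:

```python
# Minimal triple store: each fact is an (entity, relation, entity) tuple.
triples = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "located_in", "Europe"),
}

def query(subject=None, relation=None, obj=None):
    """Pattern-match over the triples; None acts as a wildcard."""
    return {
        (s, r, o) for (s, r, o) in triples
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    }

in_europe = query(relation="located_in", obj="Europe")   # both located_in facts
```

Production systems add indexing over subject, relation, and object so each wildcard pattern avoids a full scan.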
Diffusion performed in latent space for efficiency.
Model that compresses input into latent space and reconstructs it.
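A toy linear version of this compress-and-reconstruct idea, in pure Python with made-up data and hyperparameters: 2-D points lying near the line y = 2x are encoded to a single latent scalar and decoded back, trained by plain SGD on reconstruction error.

```python
import random

random.seed(0)
# Synthetic data: points near the line y = 2x (nearly rank-1, so 1-D latent suffices)
data = [(x, 2 * x + random.gauss(0, 0.01)) for x in (random.uniform(-1, 1) for _ in range(100))]

we = [0.5, 0.5]   # encoder weights: z = we . p
wd = [0.5, 0.5]   # decoder weights: reconstruction = wd * z

def forward(p):
    z = we[0] * p[0] + we[1] * p[1]          # encode to latent scalar
    return z, (wd[0] * z, wd[1] * z)         # decode back to 2-D

def mean_loss():
    total = 0.0
    for p in data:
        _, (rx, ry) = forward(p)
        total += (rx - p[0]) ** 2 + (ry - p[1]) ** 2
    return total / len(data)

loss_before = mean_loss()
lr = 0.02
for _ in range(300):                          # SGD on squared reconstruction error
    for p in data:
        z, (rx, ry) = forward(p)
        dx, dy = rx - p[0], ry - p[1]
        gz = 2 * dx * wd[0] + 2 * dy * wd[1]  # gradient w.r.t. the latent code
        wd[0] -= lr * 2 * dx * z
        wd[1] -= lr * 2 * dy * z
        we[0] -= lr * gz * p[0]
        we[1] -= lr * gz * p[1]
loss_after = mean_loss()
```

Real autoencoders stack nonlinear layers, but the objective is the same: minimize the gap between input and reconstruction through a narrower latent space.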
Mathematical representation of friction forces.
Software simulating physical laws.
Space of all possible robot configurations.
Inferring the agent’s internal state from noisy sensor data.
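One classic recursive estimator for this problem is the Kalman filter; a scalar sketch, where the true position, noise levels, and number of readings are all made up:

```python
import random

random.seed(1)
true_pos = 3.0
# Noisy range readings of a stationary target
readings = [true_pos + random.gauss(0, 0.5) for _ in range(50)]

est, var = 0.0, 1.0      # prior belief: mean 0, variance 1
meas_var = 0.25          # sensor noise variance (0.5 squared)
for z in readings:
    k = var / (var + meas_var)     # Kalman gain: how much to trust the reading
    est += k * (z - est)           # correct the estimate toward the reading
    var *= (1 - k)                 # belief sharpens with every update
```

Each update fuses the prior belief with one measurement, weighted by their relative uncertainties; a full filter adds a motion-model prediction step between measurements.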
Ability to replicate results given the same code and data; harder with distributed training and nondeterministic operations.
Strategy mapping states to actions.
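In its simplest tabular form, such a mapping is just a lookup table; the state and action names below are hypothetical:

```python
# Deterministic tabular policy: state -> action.
policy = {
    "start": "forward",
    "junction": "turn_left",
    "corridor": "forward",
    "goal": "stop",
}

def act(state):
    # Fall back to a safe default for states the policy has never seen.
    return policy.get(state, "stop")
```

Learned policies generalize this table to a function (often a neural network) over continuous or unseen states.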
Continuous cycle of observation, reasoning, action, and feedback.
Separates planning from execution in agent architectures.
Simultaneous Localization and Mapping for robotics.
Interleaving reasoning and tool use.
AI systems that perceive and act in the physical world through sensors and actuators.
External sensing of surroundings (vision, audio, lidar).
RL without explicit dynamics model.
Finding routes from start to goal.
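A common baseline is breadth-first search over a grid, which returns a shortest path when every move has equal cost; the occupancy grid below is made up:

```python
from collections import deque

# Toy occupancy grid: 0 = free, 1 = obstacle.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def find_path(grid, start, goal):
    """BFS over 4-connected cells; returns a shortest path or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}            # also serves as the visited set
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:       # walk parent links back to start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None

path = find_path(grid, (0, 0), (3, 3))
```

Swapping the FIFO queue for a priority queue ordered by cost-plus-heuristic turns this into A*.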
Planning via artificial force fields.
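The idea can be sketched as gradient descent on a combined field: an attractive pull toward the goal plus a repulsive push away from obstacles. The goal, obstacle position, gains, and step size below are all illustrative:

```python
import math

goal = (5.0, 5.0)
obstacle = (2.5, 2.0)

def force(pos, k_att=1.0, k_rep=0.5, influence=1.5):
    # Attractive component: proportional to the vector toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive component: active only inside the obstacle's influence radius.
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if 0 < d < influence:
        mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
        fx += mag * dx / d
        fy += mag * dy / d
    return fx, fy

# Follow the field with small steps from the start position.
pos = (0.0, 0.0)
for _ in range(300):
    fx, fy = force(pos)
    pos = (pos[0] + 0.05 * fx, pos[1] + 0.05 * fy)
```

The known weakness of the method is local minima: where attraction and repulsion cancel, the robot can stall short of the goal.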
Detecting and avoiding obstacles.