Reproducibility has become a cornerstone of trustworthy research—but ask any researcher or data scientist, and they’ll tell you how painful it can be to reproduce even your own experiments from last month. Managing parameters, random seeds, and environments often eats up more time than the actual analysis.
This is where RexF steps in. RexF is a Python library designed to simplify reproducible research. Unlike other experiment tracking platforms that demand databases, servers, or complicated configs, RexF promises “from idea to insight in under 5 minutes, with zero configuration.”
🚀 What Makes RexF Different?
While popular tools like MLflow, Sacred, or Weights & Biases are powerful, they typically come with setup overhead: tracking servers, backend stores, or config files. RexF stands out with its simplicity:
```python
from rexf import experiment, run

@experiment
def my_research_function(learning_rate=0.01, batch_size=32):
    accuracy = train_model(learning_rate, batch_size)
    return {"accuracy": accuracy, "loss": 1 - accuracy}

# Run your experiment
run_id = run.single(my_research_function, learning_rate=0.005, batch_size=64)
```
That’s it. Behind the scenes, RexF captures:
- Parameters and hyperparameters
- Execution time and system environment
- Git commit info
- Random seeds
- Results in a local SQLite database
No configs. No servers. Just insights.
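To make the zero-config promise concrete, here is a rough sketch of the kind of per-run metadata such a tracker records. This is purely illustrative: the function `capture_run_metadata` is our own invention, not part of RexF's API or internals.

```python
import platform
import random
import subprocess
import sys
import time

def capture_run_metadata(seed=None):
    """Illustrative sketch of the metadata an experiment tracker
    might record for each run (not RexF's actual implementation)."""
    if seed is None:
        seed = random.randrange(2**32)
    random.seed(seed)  # fixing the seed makes the run replayable
    try:
        commit = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], stderr=subprocess.DEVNULL
        ).decode().strip()
    except Exception:
        commit = None  # not inside a git repository, or git missing
    return {
        "seed": seed,
        "git_commit": commit,
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "timestamp": time.time(),
    }
```

Storing a record like this alongside each result is what makes a run reproducible later: the same seed, code version, and environment can be restored.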
Smart Features That Researchers Love
1. Automated Parameter Exploration
RexF can run adaptive searches or grid sweeps without external tools:
```python
run.auto_explore(
    my_research_function,
    strategy="adaptive",
    budget=20,
    optimization_target="accuracy",
)
```
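`run.auto_explore` handles the search internally; conceptually, a budgeted exploration resembles this toy random search (our own sketch, not RexF code):

```python
import random

def random_search(fn, space, budget, target):
    """Try `budget` random parameter combinations and keep the best."""
    best_params, best_score = None, float("-inf")
    for _ in range(budget):
        params = {name: random.choice(choices) for name, choices in space.items()}
        score = fn(**params)[target]
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

An adaptive strategy goes one step further, biasing later samples toward regions that have already scored well instead of sampling uniformly.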
2. Natural Language Queries
Want to find experiments with >90% accuracy? Just ask:
```python
high_accuracy = run.find("accuracy > 0.9")
```
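Under the hood, a query string like this has to be parsed and matched against stored results. A minimal sketch of that idea (our own illustration; `find_runs` is hypothetical and RexF's actual parser is surely richer):

```python
import operator
import re

# Comparison operators a simple filter expression might support.
OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq,
       ">": operator.gt, "<": operator.lt}

def find_runs(results, query):
    """Match a filter like 'accuracy > 0.9' against result dicts."""
    field, op, value = re.match(r"(\w+)\s*(>=|<=|==|>|<)\s*(.+)", query).groups()
    return [r for r in results if field in r and OPS[op](r[field], float(value))]
```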
3. Insights & Recommendations
RexF doesn’t just log—it suggests what to try next.
```python
suggestions = run.suggest(my_research_function, count=5)
```
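How might such suggestions be generated? One simple strategy is to jitter the best-known parameters. The `suggest_next` helper below is a hypothetical sketch of that idea, not RexF's actual algorithm:

```python
import random

def suggest_next(best_params, count=5, scale=0.2):
    """Naive suggester: jitter each numeric parameter of the
    best-known configuration by up to +/- `scale` (illustrative only)."""
    suggestions = []
    for _ in range(count):
        candidate = {
            name: value * (1 + random.uniform(-scale, scale))
            if isinstance(value, (int, float)) else value
            for name, value in best_params.items()
        }
        suggestions.append(candidate)
    return suggestions
```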
Real-World Example: Monte Carlo π Estimation
To see RexF in action, here’s a classic Monte Carlo π estimation experiment:
```python
import math, random

@experiment
def estimate_pi(num_samples=10000):
    inside = sum(random.random()**2 + random.random()**2 <= 1
                 for _ in range(num_samples))
    pi_estimate = 4 * inside / num_samples
    error = abs(pi_estimate - math.pi)
    return {"pi_estimate": pi_estimate, "error": error}

# Run experiments
run.single(estimate_pi, num_samples=50000)
```
RexF makes it easy to run variants, compare results, and automatically explore parameter space.
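As a plain-Python illustration of why running variants matters, the estimate tightens as the sample count grows. Here `estimate_pi_plain` is an undecorated stand-in for the experiment above, so the comparison runs without RexF:

```python
import math
import random

def estimate_pi_plain(num_samples, seed=0):
    rng = random.Random(seed)  # fixed seed keeps each run reproducible
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1
                 for _ in range(num_samples))
    estimate = 4 * inside / num_samples
    return {"pi_estimate": estimate, "error": abs(estimate - math.pi)}

# Compare variants: larger samples should tend to shrink the error.
for n in (1_000, 10_000, 100_000):
    print(n, estimate_pi_plain(n))
```

With RexF, each of those variants would be a tracked run, so the comparison comes for free from the stored results.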
Dashboard and CLI Tools
RexF also ships with a web dashboard and CLI utilities:
```bash
rexf-analytics --summary
rexf-analytics --insights
rexf-analytics --dashboard
```
The dashboard provides live monitoring, while the CLI makes querying fast and lightweight.
Why RexF Matters
The reproducibility crisis in computational research is real. By automatically capturing code versions, environments, and seeds, RexF lowers the barrier to trustworthy, repeatable science.
It’s especially useful for:
- Researchers publishing papers
- Data scientists comparing models
- Students learning best practices early
- Teams collaborating on shared experiments
⚡ Getting Started in 30 Seconds
```bash
pip install rexf
```

```python
from rexf import experiment, run

@experiment
def quick_start(x=2):
    return {"result": x * 10}

run.single(quick_start, x=5)
```
That’s all it takes to start tracking experiments—no database, no config files.