This article explains why calling mlflow.set_experiment() can fail with a connection-refused error, and walks you through starting a local MLflow backend service, then completing the full loop: experiment creation, model training, and metric logging.
MLflow's Tracking feature relies on a backend service to persist experiments, runs, parameters, and metrics. When you call mlflow.set_experiment() with the tracking URI pointed at an HTTP address (such as http://127.0.0.1:8080) but no server is listening there, the client's request has nowhere to go and fails with a connection-refused error.

✅ The fix: start the MLflow Tracking Server first, then run your Python script.
Run the following command in a terminal (Windows PowerShell / CMD, or a macOS/Linux terminal):
mlflow server \
    --host 127.0.0.1 \
    --port 8080 \
    --backend-store-uri sqlite:///mlflow.db \
    --default-artifact-root ./mlruns
✅ Tip: make sure MLflow is installed first (pip install mlflow). On a successful start, the terminal prints a message indicating the tracking server is listening on http://127.0.0.1:8080; open that address in a browser to view the experiment dashboard.
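Before running the training script, you can also verify from Python that the server is actually reachable. A minimal stdlib-only helper (the /health endpoint path is what current MLflow servers expose; treat it as an assumption for your version):

```python
from urllib.request import urlopen
from urllib.error import URLError


def tracking_server_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url`; False on refusal or timeout."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        return False


if tracking_server_up("http://127.0.0.1:8080/health"):
    print("MLflow server is up")
else:
    print("Connection refused: start `mlflow server` first")
```

If this prints the "Connection refused" branch, your script's mlflow.set_experiment() call will fail the same way.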
Once the server is confirmed running, execute the complete example below (experiment setup, model training, and logging):
import mlflow
from mlflow.models import infer_signature
import pandas as pd
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
# ✅ Set the tracking URI and experiment only after the server is running
mlflow.set_tracking_uri("http://127.0.0.1:8080")
mlflow.set_experiment("MLflow Quickstart")

# Start a new run
with mlflow.start_run():
    # Load data
    iris = datasets.load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.2, random_state=42
    )

    # Train the model
    clf = LogisticRegression(max_iter=200)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)

    # Log parameters and metrics
    mlflow.log_param("model_type", "LogisticRegression")
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("accuracy", accuracy_score(y_test, y_pred))
    mlflow.log_metric("precision", precision_score(y_test, y_pred, average="weighted"))
    mlflow.log_metric("recall", recall_score(y_test, y_pred, average="weighted"))
    mlflow.log_metric("f1", f1_score(y_test, y_pred, average="weighted"))

    # Log the model (with an automatically inferred signature)
    signature = infer_signature(X_train, clf.predict(X_train))
    mlflow.sklearn.log_model(clf, "model", signature=signature)

print("✅ Run logged successfully! Check http://127.0.0.1:8080")
With that, you have the full local MLflow tracking workflow in place: server startup → experiment definition → run logging → Web visualization. From here you can explore more advanced capabilities such as the Model Registry, project packaging (Projects), and deployment (Models Serving).