claudevs
Frequent Visitor

MLFlow - Problem with aliases and metrics

Our team is trying to use MLflow to manage the models created in Fabric, but this has been difficult because many features of the MLflow Python library are apparently unavailable in Fabric. We tried to implement aliases on our models without any success, and the same thing happened when we tried to access metrics via models (not runs). Every time we instantiate a model, only some data are available, such as model_name, version, and tags, but we cannot access the params and metrics associated with that model.

 

I would appreciate any help, as I don't know whether these issues are due to a limited implementation of MLflow's capabilities in Fabric or to incorrect use of the MLflow API. If anyone could share some code that allows access to model metrics and parameters from any type of model object (LoggedModel, ModelVersion, ModelInfo, etc.), it would be very much appreciated.

1 ACCEPTED SOLUTION
bariscihan
Resolver II

What you’re seeing is expected MLflow behavior, and it can be confusing at first in Fabric.

Key point: in MLflow, metrics/params live on the Run, not on the Model Registry objects. A ModelVersion (or ModelInfo, LoggedModel, etc.) typically exposes registry metadata (name, version, tags, source, run_id), but not the run’s params/metrics directly. The right pattern is:

ModelVersion → run_id → client.get_run(run_id)

If aliases are failing in Fabric, it’s usually because the hosted registry endpoint doesn’t support the alias APIs (or the underlying MLflow server feature set differs). In that case, a practical workaround is to use model version tags as a “pseudo-alias” (e.g., champion=true, env=prod) and resolve your “production” version via tags.

Example:

import mlflow
from mlflow.tracking import MlflowClient

client = MlflowClient()

model_name = "<YOUR_MODEL_NAME>"
model_version = "<YOUR_VERSION_NUMBER>"  # e.g. "3"

# 1) Registry -> ModelVersion
mv = client.get_model_version(name=model_name, version=model_version)
print("Model:", mv.name, "Version:", mv.version, "RunId:", mv.run_id)

# 2) Run -> params/metrics
run = client.get_run(mv.run_id)

print("\nParams:")
print(run.data.params)

print("\nMetrics:")
print(run.data.metrics)

# Optional: metric history (if you logged multiple values over time)
# hist = client.get_metric_history(mv.run_id, "accuracy")
# print(hist)
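The tag-based "pseudo-alias" workaround mentioned above can be sketched as follows. This assumes the registry supports model version tags even where alias APIs are unavailable; the `champion`/`true` tag convention and the `resolve_champion` helper are illustrative, not MLflow built-ins. The helper is duck-typed: `client` can be any object exposing `MlflowClient`'s `search_model_versions(filter_string)` shape.

```python
# Hedged sketch: resolve a "production" model version via tags instead of aliases.
# Assumption: version tags work in your registry even if alias APIs do not.

def resolve_champion(client, model_name, tag_key="champion", tag_value="true"):
    """Return the highest-numbered version of model_name carrying the tag, or None."""
    candidates = [
        mv for mv in client.search_model_versions(f"name='{model_name}'")
        if mv.tags.get(tag_key) == tag_value
    ]
    # Version numbers are strings in MLflow; compare them numerically.
    return max(candidates, key=lambda mv: int(mv.version), default=None)

# With a real MlflowClient, you would first "promote" a version by tagging it:
#   client.set_model_version_tag(model_name, "3", "champion", "true")
# and then resolve and load it:
#   mv = resolve_champion(MlflowClient(), model_name)
#   model = mlflow.pyfunc.load_model(f"models:/{mv.name}/{mv.version}")
```

Moving the tag (delete it from the old version, set it on the new one) then plays the role of reassigning an alias.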


5 REPLIES
v-karpurapud
Community Support

Hi @claudevs 

We have not received a response from you regarding the query and are following up to check whether you have had the opportunity to review the information provided. Please feel free to contact us if you have any further questions.

 

Thank You.

v-karpurapud
Community Support

Hi @claudevs 

Thank you for posting your query in the Microsoft Fabric Community Forum, and thanks to @bariscihan for sharing valuable insights.

 

Could you please confirm whether your query has been resolved by the provided solution? If you have any more questions, please let us know and we'll be happy to help.

Regards,

Microsoft Fabric Community Support Team

 


Thank you for your answer @bariscihan

In the end, your solution is part of how we use MLflow. Nevertheless, I'm still finding more missing features of MLflow in Fabric. Although these features can be replaced with other approaches, it would be great if they could be implemented in the future, as MLflow is a powerful tool that helps a lot in the ML model lifecycle process.

