
Conversation


@IvanARashid IvanARashid commented Oct 21, 2025

Describe the changes you have made in this PR

Implemented dictionary bounds and initial guesses in standardized algorithms where appropriate

Some issues remain when testing the "unreasonable" bounds.

I wasn't able to test the MATLAB code. Could one of you do it to check if it works?

Checklist

  • Self-review of changed code
  • Added automated tests where applicable
  • Update Docs & Guides

@oscarjalnefjord
Collaborator

To resolve some of the failing tests:

  • PV_MUMC_biexp is failing due to a typo on line 80 in two_step_IVIM_fit.py where - should be +

  • There are a number of algorithms that accept bounds but do not use them for all parameters or at every internal step. Should they be skipped when testing bounds, or should the code contributions be modified? For example:

    • TCML_...SLS fails because it only uses the bound for Dp. This propagates to algorithms TCML...lsq_sls_* in various ways
    • IAR_LU_subtracted does not apply bounds for f
    • IAR_LU_segmented_3step does not apply bounds on f in intermediate step and therefore throws error in later step

The MATLAB-based algorithms are passing

@IvanARashid
Author


Thanks for checking this. As I've mentioned previously, I think original code should only be adjusted by the authors. So with this testing we seem to be able to find limitations in certain algorithms, which in one way is good. What is not good is that these will continue to show up as failures, and if we keep changing the wrapper or other surrounding code, we won't know whether the failures are due to a problem with the algorithm or a problem with our own code. What are your thoughts on this?

@oscarjalnefjord
Collaborator


This is kind of a unique situation. In the future, PRs with new algorithms will only be merged if they pass the tests, but here we have a bunch of algorithms that were added before all tests were up and running... I guess we can still ask the authors to adjust their code.

The problem with bounds only being applied to a subset of the model parameters is a bit tricky. One solution could be to throw a warning and set the bounds to +/- inf for the parameters where bounds are not used. This could then be detected by the test to skip testing the bounds for those specific parameters. +/- inf should be accepted as bounds by most optimization algorithms.
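A minimal sketch of what that warn-and-widen step could look like in the standardized wrappers (the helper name `pad_unused_bounds` and the parameter set are hypothetical illustrations, not existing repository code):

```python
import math
import warnings

def pad_unused_bounds(bounds, used_params, all_params=("D", "f", "Dp", "S0")):
    """Hypothetical helper: for parameters the wrapped algorithm does not
    actually constrain, emit a warning and widen the bounds to (-inf, inf)
    so the test suite can detect and skip bound checks for them."""
    padded = dict(bounds)
    for p in all_params:
        if p not in used_params:
            warnings.warn(
                f"Bounds for '{p}' are not used by this algorithm; "
                f"setting them to (-inf, inf)."
            )
            padded[p] = (-math.inf, math.inf)
    return padded
```

The test could then treat any `(-inf, inf)` pair as "bounds unsupported for this parameter" and skip the corresponding assertion.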

@IvanARashid
Author


You're making a good point regarding the bug fixing. We could fix the issue in PV's algorithm and see if we can get a thumbs up from her since it's such a small thing.

Regarding the two algorithms that are mine: the subtracted algorithm does not estimate f, but calculates it from the two S0 estimates, so there are no bounds to apply there. The same is true for the intermediate step in the 3step algorithm (although there is a final estimation of f in the third step where bounds are applied). This is by design, and if it conflicts with the testing, then yes, they should be excluded. Or perhaps there is some other way we could change the test pass/fail criteria? E.g., only fail when the result is wrong across the board, rather than whenever a single parameter falls outside the strict bounds.
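For context, a sketch of the kind of derived f Ivan describes (the helper name and the exact convention, f = 1 - S0_D / S0, are assumptions for illustration; the actual formula lives in the IAR Lund University code):

```python
def f_from_s0_estimates(s0_full, s0_diffusion):
    """Hypothetical illustration: in segmented/subtracted IVIM fits the
    perfusion fraction is often derived from two S0 estimates rather than
    fitted directly, e.g. f = 1 - S0_D / S0. Since f is never a free fit
    parameter here, there is no bound on f to apply."""
    return 1.0 - s0_diffusion / s0_full
```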

self.bounds = bounds
self.bounds = ([self.bounds["D"][0], self.bounds["f"][0], self.bounds["Dp"][0], self.bounds["S0"][0]],
               [self.bounds["D"][1], self.bounds["f"][1], self.bounds["Dp"][1], self.bounds["S0"][1]])
Collaborator

I would be in favour of letting self.bounds remain the dictionary and only translating it to algorithm-specific notation in the ivim_fit routine, as a temporary variable. This gives more consistency and allows changing bounds in a standardized fashion.
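A sketch of that translation done as a temporary variable inside the fit routine (the helper name `bounds_as_tuples` and the parameter order are assumptions, mirroring the snippet under review):

```python
def bounds_as_tuples(bounds, order=("D", "f", "Dp", "S0")):
    """Translate the standardized bounds dictionary into the
    (lower, upper) lists notation that e.g. scipy's curve_fit expects.
    Intended to be called inside ivim_fit, leaving self.bounds as the
    dictionary so bounds can be changed in a standardized fashion."""
    lower = [bounds[p][0] for p in order]
    upper = [bounds[p][1] for p in order]
    return (lower, upper)
```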

Collaborator

Ah, I also sent an e-mail, although I see Oscar may have touched upon some of those topics.

Collaborator

So Paulien's code gives some of these errors that P0 is outside the bounds. Here is why:
1: Paulien does not use the user-specified P0; she calculates P0 herself (src/original/PV_MUMC/two_step_IVIM_fit.py, e.g. lines 79-80).
2: She does this wrongly. She takes the mean of the bounds, but swaps the bounds. This means the S0 initial guess is the mean of the Dt bounds and the Dt initial guess is the mean of the S0 bounds.

79 boundsmonoexp = ([bounds[0][1], bounds[0][2]], [bounds[1][1], bounds[1][2]])
80 params, _ = curve_fit(monofit, high_b, high_dw_data, p0=[(bounds[1][1]-bounds[0][1])/2,(bounds[1][2]-bounds[0][2])/2], bounds=boundsmonoexp)

I do not know what to do here; it is source code, so we should not adapt it. But Paulien will also not really have time to fix this, I guess? @paulienvoorter ?

This does open a general discussion: what to do with faulty contributions? Should we at some point start trimming?
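For reference, the apparent intent of the p0 computation on line 80 is the midpoint of each bound pair, i.e. (upper + lower) / 2 rather than (upper - lower) / 2. A hedged standalone sketch (not Paulien's code; `midpoint_p0` is a hypothetical name):

```python
def midpoint_p0(bounds):
    """Midpoint of each (lower, upper) bound pair, the apparent intent of
    the initial-guess computation: (hi + lo) / 2, not (hi - lo) / 2."""
    lower, upper = bounds
    return [(lo + hi) / 2 for lo, hi in zip(lower, upper)]
```

With the original minus sign, a bound pair like (0.9, 1.1) yields a guess of 0.1 instead of 1.0, which is what pushes the fit off.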

Collaborator

And for src/standardized/IAR_LU_segmented_3step.py, the initial guess changes halfway through the fit routine. If we take a look at src/original/IAR_LundUniversity/ivim_fit_method_segmented_3step.py, in particular line 83, we see you update the initial guess for f0. Now if tight bounds are set, this runs into problems. Maybe you can add a check to see whether this new estimate is within the predefined fit bounds?
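Such a check could be as simple as clipping the updated estimate back into the bounds before the next step (a hedged sketch; `clamp_guess` is a hypothetical name, not existing code):

```python
def clamp_guess(guess, lower, upper):
    """Clip an updated intermediate initial guess (e.g. an f0 update made
    halfway through a segmented fit) back into the predefined bounds, so
    tight bounds do not make a later fitting step fail."""
    return min(max(guess, lower), upper)
```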

Collaborator

From TCML_TechnionIIT there are a lot of failures, but this seems related to the fact that they do not really use the bounds (properly)?

Contributor

@oliverchampion I don't think I was swapping the bounds, as I first pass Dt and then f (S0 bounds are not considered in the monoexponential fit in line 80). But Oscar is correct about the typo in line 80, which causes an incorrect initial guess.
I corrected this in #128

Let me know if I need to change this in other places as well
