I convert and run inference with Stable Diffusion on a MacBook Pro M2, a MacBook Air M2, and a MacBook Air M1.
When using the ml-stable-diffusion project to convert a Stable Diffusion model from PyTorch to Core ML, the conversion appears to use CPU power only.
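For reference, a typical conversion invocation looks something like the following. This is a sketch based on the ml-stable-diffusion repository's documented CLI; the model identifier and output path here are placeholders, not the exact ones I used:

```shell
# Convert a Stable Diffusion model from PyTorch to Core ML
# (model-version and output directory are placeholders)
python -m python_coreml_stable_diffusion.torch2coreml \
  --convert-unet \
  --convert-text-encoder \
  --convert-vae-decoder \
  --model-version stabilityai/stable-diffusion-xl-base-1.0 \
  -o ./coreml-output
```

Timing this command (e.g. by prefixing it with `time`) is how I compared the three machines.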
I tested the conversion speed on macOS 14. When converting DreamShaper XL 1.0, the ranking was M1 > M2 > Pro M2, i.e. the M1 was the fastest.
Then I tried Realistic XL, and the conversion speed ranking was M1 > Pro M2 > M2. The M1 is still the winner.
That doesn't make sense to me; I would expect the speed to be Pro M2 > M2 > M1!
Does anybody know what the difference is between these three?