I will present a framework to analyse statistically distributed data and extract physical parameters through Bayesian inference, at the edge of machine learning. This perspective affects the formulation of our traditional fits, the way statistical errors are computed and propagated, as well as model selection and the estimation of systematic errors. My ambition is to define a consistent framework that puts all these elements together more rigorously, while allowing arbitrary generalisations of the current methods. The hypotheses made by existing methods are mostly moved into the construction of a model, so that their validity can be evaluated as part of model selection. I will present the construction of a few models for a realistic case with actual Lattice QCD data, together with an implementation based on the general-purpose package PyMC. I will also comment on how this point of view can shed light on methods and tricks used in our more traditional approaches, in particular the model averaging of uncorrelated fits. Finally, I will discuss the opportunities and challenges of this approach.