Discussion summary
------------------

"How shall one include upcoming LHC data in global BSM parameter fits?"

Specifically, we have discussed the (mutually related) questions

1) In what form should the experimental results be made available by the LHC collaborations?
2) How can we test models that have not been specifically analysed by the experimental collaborations?

Regarding Q 1), Kyle Cranmer, Max Baak et al. have proposed to publish RooFit/RooStats workspaces containing the dataset and the full probability model (PDFs), such that the likelihood function can be evaluated (for a given model) at any given point in parameter space. The likelihood function depends in general on parameters of interest and on nuisance parameters. Workspaces also provide the necessary information to generate toy data for coverage tests, etc.

However, the likelihood function refers to a specific model and cannot be used to re-interpret published data in terms of different new-physics models without approximations. Thus, in addition to likelihood functions for specific models, we need a more model-independent way to publish experimental results. First, one would like to test models that may not have been analysed by the LHC collaborations; secondly, the published likelihoods are not only model-specific but also rely on theoretical tools (Monte Carlo programs etc.) that may improve with time. It is therefore desirable to present the data in a way that would allow one to redo the entire physics analysis with improved tools and/or to re-interpret the data in terms of different models. The following options have been discussed:

1) Publication of measured data and estimated background in terms of "detector-unfolded" fiducial cross-sections (i.e. corrected for reconstruction effects).
   Pro: would allow theorists to test any model without having to know the specific details of LHC detector efficiencies.
   Con: detector unfolding is model-dependent, because the reconstruction efficiencies are model-dependent.
   Con: selection cuts are tuned to a specific model and might not be optimal for alternative models.

2) Publication of measured data and estimated background events without detector unfolding.
   Pro: does not involve any model dependence.
   Con: theorists need to rely on simplified detector simulations to test alternative models.
   Con: selection cuts are tuned to a specific model and might not be optimal for alternative models.

3) Publication of experimental results in terms of simplified models.
   Pro: should allow a wider class of models to be tested with less bias.
   Con: mapping a specific new-physics model onto a (set of) simplified models requires approximations and might introduce systematic errors.
   (See http://www.lhcnewphysics.org for a list of simplified models.)

Approaches like QUAERO and RECAST have not been discussed, as they cannot be carried out outside the experimental collaborations. Since all three approaches 1)-3) introduce approximations, it would be desirable to compare them for a number of selected test cases.

To proceed, and to prepare a more quantitative discussion at the upcoming LPCC meeting, we propose to start working on the inclusion of workspaces into global fits. Max Baak has suggested to send around dummy workspaces for the squark/gluino (or m0/m12) mass plane, together with instructions on how to use them in fitter programs. We furthermore propose to directly compare a likelihood fit for a specific model, as provided by the experiments, with the approximate methods 1)-3) listed above.
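As an illustration of how such a published workspace could be used inside a fitter program, the following ROOT macro is a minimal sketch. The file name and the names of the workspace, dataset, ModelConfig and parameters ("combined", "obsData", "ModelConfig", "m0", "m12") are placeholders, not agreed conventions; the actual instructions would come with the dummy workspace.

// use_workspace.C -- minimal sketch: evaluate the profiled likelihood of a
// published RooFit/RooStats workspace at one point of the (m0, m12) plane.
// All object and parameter names below are placeholders.
#include <iostream>
#include "TFile.h"
#include "RooWorkspace.h"
#include "RooAbsData.h"
#include "RooAbsPdf.h"
#include "RooAbsReal.h"
#include "RooRealVar.h"
#include "RooMinimizer.h"
#include "RooStats/ModelConfig.h"

void use_workspace()
{
   TFile *f = TFile::Open("sugra_workspace.root");
   RooWorkspace *w = (RooWorkspace*) f->Get("combined");

   RooAbsData *data = w->data("obsData");
   RooStats::ModelConfig *mc = (RooStats::ModelConfig*) w->obj("ModelConfig");
   RooAbsPdf *pdf = mc->GetPdf();

   // Negative log-likelihood, including the nuisance-parameter constraint terms
   RooAbsReal *nll = pdf->createNLL(*data, RooFit::Constrain(*mc->GetNuisanceParameters()));

   // Fix the parameters of interest to the point tested by the global fit ...
   w->var("m0")->setVal(300.);   w->var("m0")->setConstant(true);
   w->var("m12")->setVal(450.);  w->var("m12")->setConstant(true);

   // ... and profile (minimise over) the nuisance parameters
   RooMinimizer m(*nll);
   m.setPrintLevel(-1);
   m.minimize("Minuit2", "Migrad");

   std::cout << "profiled -log L at this parameter point: " << nll->getVal() << std::endl;
}

In a global fit, the profiled -log L obtained in this way would simply be added to the other contributions to the total likelihood (or chi^2) for each tested parameter point.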
To proceed here, we would have to consider a toy analysis in which the experiments generate events for, e.g., an mSUGRA model and provide i) a workspace containing the dataset and the PDFs obtained assuming this specific model, and ii) a fit of the toy data to a (set of) simplified models. In order to use the results in terms of simplified models in an mSUGRA fit, we need to make additional assumptions and approximations; we would thus like to see how the results of the two approaches compare. (A sketch of the kind of approximate likelihood involved in the second approach is appended after this note.)

MK & PW, 6.12.2010
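For orientation, the following sketch shows the kind of approximate counting-experiment likelihood a theorist might build from published event counts and background estimates (as in options 2 or 3 above), to be compared point by point in the (m0, m12) plane with the profiled likelihood from the workspace. All numbers are invented placeholders for the toy study, not measurements, and systematic uncertainties on the backgrounds are ignored for simplicity.

// toy_counting_llh.C -- minimal sketch of an approximate likelihood from
// published counts: a product of Poisson terms over signal regions, with the
// signal yields predicted for the tested mSUGRA point (e.g. via
// simplified-model efficiency maps).  All numbers are placeholders.
#include <cmath>
#include <iostream>
#include "TMath.h"

// -2 log of the product of Poisson likelihoods over the signal regions
double nllCounting(const double *nobs, const double *bkg,
                   const double *sig, int nRegions)
{
   double nll = 0.;
   for (int i = 0; i < nRegions; ++i) {
      double mu = sig[i] + bkg[i];   // expected number of events
      nll += -2. * (nobs[i] * std::log(mu) - mu - TMath::LnGamma(nobs[i] + 1.));
   }
   return nll;
}

void toy_counting_llh()
{
   double nobs[3] = {12., 5., 1.};    // observed events per signal region (placeholder)
   double bkg[3]  = {10.3, 4.1, 0.8}; // estimated backgrounds (placeholder)
   double sig[3]  = {3.2, 1.5, 0.4};  // signal yields for the tested mSUGRA point (placeholder)
   std::cout << "-2 log L (counting approximation): "
             << nllCounting(nobs, bkg, sig, 3) << std::endl;
}

Comparing this approximate -2 log L with the exact, profiled workspace likelihood for the same toy data would quantify the systematic error introduced by the approximate methods 1)-3).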