Arie Bodek and Cynthia Keppel
Answers to questions follow below.
Thank you for taking the time to carefully read our
proposal in advance, and to give us time to consider your concerns. Here
are some answers to your questions (below). I would be happy to talk with
you as well...you know where to find me! Also, Arie will be here Friday
evening and so we could both talk with you on the weekend or Monday if
you wanted to do that.
Questions and Answers
1) Obviously, this proposal is related
to similar experiments already approved or even completed. In particular,
some results from 99-118 and how they relate to the proposed program would
be interesting. Is there any overlap (or could there be) with PR03-103?
(J. Arrington, spokesperson)
Experiment E99-118 was approved and completed to
test the "HERMES effect", which was observed at very low x and Q2, and
therefore also higher W2 than this proposal. The W2 of interest to E99-118
is significantly above the resonance region (~10 GeV2), and the new E03-110
experiment is, in fact, complementary to E99-118, adding the resonance
region which has not been measured to the DIS data set. Some cross section
data (not L/T separable) were obtained in the resonance region during E99-118
to assist in radiative corrections, however, and these data can be used
in some cases for systematic cross checks for the proposed experiment and
to add to the number of epsilon points in a few cases. In summary, E03-110
is complementary to E99-118 in that it covers the high x region and a much
larger range in Q2, and the existence of E99-118 data will be helpful to
the analysis of E03-110. The hydrogen experiment E94-110 (done) and deuterium
experiment E02-109 (approved) provide data which overlaps this proposal
kinematically. The hydrogen and deuterium data will be used as input to
several independent models of the resonance region (which can then be used
to predict the vector part of neutrino cross sections on H and D via Clebsch-Gordan
coefficients). However, since high statistics neutrino data will
be taken on nuclear targets (C, Water, Fe and liquid Argon), one needs
corresponding data on nuclear targets with electrons to accurately model
A-dependent effects. Note that the Delta region, which has both I=1/2 and
I=3/2 final states, and the quasi-elastic region at I=1/2, are quite different
from the DIS region where many resonances contribute. We will also use
data from E02-109 which covers the quasi-elastic region only, for
radiative corrections. PR03-103 (John Arrington) covers the quasielastic,
resonance, and DIS regions. However, it nonetheless has little overlap
with this proposal because:
(a) The data is almost entirely on H, 2H, 3He, and
(b) There are no L/T separations.
(c) There is a small amount of data planned for heavier
nuclei, but only at one energy and Q^2 value, and so not useful for either
L/T or the proposed duality studies.
We also note that John Arrington is a collaborator
on this proposal.
2) In a similar vein, could you address the
question "what do we learn from this wide variety of nuclear targets
(some quite unconventional, for sure) that we cannot learn from a careful
comparison of just a few 'pure' species, e.g. C, Al and maybe Fe?" My naive
assumption would be that R_A is approximately equal to R_D (plus Fermi
smearing), and if there is an EMC-type effect, it should be quantifiable
by studying a few denser/heavier nuclei. I'm worried about the overhead
of all these target changes.
Target changes in Hall C take 10 minutes at most,
and typically far less (depending on how far away targets are on the ladder
from one another). This is not a major source of overhead. R_A could be
quite different from R_D in the resonance region at large x. Initial data
from Hall C suggest a definite EMC-type effect in the resonance region,
however it has not yet been possible to study this in detail. Moreover,
the bulk of the proposed data are only with C and Fe/Cu targets, for which
high statistics neutrino data at Fermilab will be forthcoming. The Al running
is not with the same statistics and is only for empty target subtraction
(see Answer 4, below). These are the targets with which most of the proposed
physics will be done, so we agree with you. There is small overhead for
the other targets (to mimic water and liquid Argon), for which data is taken
mostly as a service to the neutrino community. One could probably save
a day of running if one eliminates these targets. However, it is beneficial
to check the extrapolation from the other targets to these. There is an
existing $200M+ neutrino detector with a water target that is going to
be used as the target for a $200M+ neutrino beam from J-PARC. The
understanding of how to model the cross-section on oxygen will be crucial
to the experiment for which these detectors are being built, and it is
currently unclear that enough can be understood with neutrino interactions
alone because of the technological difficulties of building a fully-active
(i.e. non water Cerenkov) water target for a neutrino experiment.
3) In general, I believe it would be good
to have a clear and detailed expose of the INTRINSIC Physics value of your
proposed measurements (what will we learn about nuclei, QCD, nuclear
corrections, the EMC effect, etc., that we don't know already) as opposed to just
the neutrino-related justification. Can you show us a comparison of
the expected data (with statistical and systematic errors) on a plot
like Fig. 1? Maybe even with expected error bars from 99-118?
Duality at high Q2 is mostly related to the fact
that the momentum sum rule (momentum carried by quarks is about 50%) in
QCD is approximately true (but has Q2 dependent corrections). Similarly,
the Gottfried sum rule [the integral of (F2proton - F2neutron)/x] is also approximately
true. However, all QCD-based sum rules break down below Q2=1. For example,
the integral of F2p which comes from the momentum sum rule, including the
elastic peak, is 1 at Q2=0 and is 0.24 at high Q2. Therefore, this kind
of duality is only approximate and breaks down totally at low Q2. On the
other hand, there are several Adler sum rules (and Adler-like sum rules
by Gilman in electron scattering for the vector part only) based on current
algebra which are exact at all Q2. These sum rules, which include the elastic
peak and the resonance region, are typically integrals of W2nubar-P
minus W2nu-P. They are number sum rules rather than momentum sum rules.
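Schematically, and hedging on normalization (conventions in the literature differ by an overall factor of 2, and the symbols below are standard quark-parton notation rather than the proposal's own), the Adler sum rule discussed here can be written as:

```latex
% Adler sum rule for the proton (a sketch; normalization conventions
% differ by an overall factor of 2 in the literature):
\begin{equation}
  S_A \;=\; \int_0^1 \frac{dx}{x}
    \left[ F_2^{\bar{\nu}p}(x,Q^2) - F_2^{\nu p}(x,Q^2) \right]
  \;=\; 2\,(u_v - d_v) \;=\; 2 ,
\end{equation}
% where the integral is understood to include the elastic peak and the
% resonance region, and current algebra fixes the result at all Q^2.
```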
The most familiar Adler sum rule is equivalent to the valence-quark count u - d = 2 - 1 = 1 at high Q2 (as
measured separately in the vector and axial vector parts of the scattering).
It is also equal to 1 at Q2=0, since the vector part of the inelastic cross
section goes to zero and it is just the vector form factor. The vector
part of the sum rule can be tested using electron scattering data on neutrons
and protons from Q2=0 to high Q2, with precise data in the resonance region, to
see how the Adler sum rule works at all Q2; at high Q2, both the
Adler sum rule and the momentum sum rule are supposed to hold. The models
fit to the electron scattering data are used to predict the vector part
of the neutrino data on nucleons and also nuclei. Note that the sum rule
is only for W_2 (so it needs R). The future neutrino data on nuclei can
be used to test the axial vector part from Q2=0 to high Q2. For the axial
vector, F2axial is not equal to zero at Q2=0 so the sum rule is completely
independent and works differently. By investigating duality from the point
of view of QCD sum rules and current Algebra sum rules at all Q2, with
both axial and vector interactions, one gains a better understanding of
duality. The delta resonance and the quasielastic region are important,
since these are definite isospin final states that contribute in a special
way to the Adler sum rule. Also, as mentioned above, an EMC-type effect
has been observed in the resonance region in Hall C. This is unpredicted.
While one might expect from duality (surprisingly, even at lower Q) that
the structure functions in the resonance region average to the DIS ones, it
is an entirely different thing for whatever nuclear effect modifies
the pdf's in DIS to be the same in the resonance regime. This observation
now needs to be examined more carefully, and the first step in checking
an effect in the F2 structure function in a regime where the longitudinal
effects are not small is to accurately extract this structure function
from the cross section - that is, to perform an L/T separation. We also
note that this is complementary once again to the existing DIS measurements,
in that the larger x and lower Q regime remain essentially EMC untested.
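As an aside, the L/T (Rosenbluth) separation referred to above can be sketched numerically: at fixed (x, Q2) the reduced cross section is linear in the virtual-photon polarization epsilon, so a straight-line fit over several epsilon points yields sigma_T (intercept), sigma_L (slope), and R = sigma_L/sigma_T. The snippet below is illustrative only; the numbers are synthetic, not proposal kinematics.

```python
# Minimal sketch of a Rosenbluth (L/T) separation. The reduced cross
# section obeys  sigma_reduced = sigma_T + epsilon * sigma_L,
# so a linear fit in epsilon separates the two terms.
import numpy as np

def lt_separation(eps, sigma_reduced):
    """Linear fit of reduced cross sections vs epsilon.

    Returns (sigma_T, sigma_L, R = sigma_L / sigma_T)."""
    slope, intercept = np.polyfit(eps, sigma_reduced, 1)
    return intercept, slope, slope / intercept

# Synthetic data: sigma_T = 10 (arbitrary units), R = 0.25,
# i.e. sigma_L = 2.5, measured at three epsilon points.
eps = np.array([0.2, 0.5, 0.8])
sigma = 10.0 * (1.0 + 0.25 * eps)

sigma_T, sigma_L, R = lt_separation(eps, sigma)
print(round(sigma_T, 3), round(sigma_L, 3), round(R, 3))
```

In practice the fit would of course be weighted by the point-to-point errors, which is why adding epsilon points (as from the E99-118 overlap) directly improves the extracted R.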
4) Regarding the Al target: Do I read your table II correctly
that seems to say that you'll spend a lot more time on Al than all other
target types? I realize this is part of E02-109, but couldn't one shorten
this time by using thicker Al targets?
Yes and No. The ratio shown in Table II is
the ratio to D2 if one wanted to get the same statistics as D2 for the
Al target using a thicker Dummy (thicker than planned by E02-109). This
does not mean that it will be run that long since it is only to be used
for empty target subtraction. We have updated the proposal to clarify
this. Please see our updated version at: http://www.pas.rochester.edu/~bodek/jlab/Rnuclear-E03-110.pdf
As shown in the Table, in the E03-110 proposal, E02-109 plans to spend
a total of 60 hours (2.5 days) out of 13 days on Dummy running. We plan
to improve on E02-109 and propose that they use a thicker (0.005 rl) target
with the same geometry as the empty target, but thicker such that it is
the same radiation length as the D and H target (thus making the radiative
corrections and geometry identical to the case of empty target when it
has H or D inside of it). The thicker Dummy has already been built and
utilized this summer for Hall C Experiment E00-002 (F2 at Low Q2). This
will save on running time for E02-109 and lead to a better measurement
of the empty target rate in a shorter time. If E02-109 accepts our
proposal, the 60 hours of running with the thicker dummy target can
be compared to the 73 hours run with deuterium. This means that Dummy statistics
will be about 1/6 of the statistics with deuterium. Although this is much
better than with their thin dummy run, it will not provide good data for
E03-110 on Aluminum for L/T separation. It will be excellent
for dummy target subtraction (as mentioned above), and will provide some
additional data with Aluminum at 1/6 of the statistics. Note that with
the thicker dummy target, not only does one improve on the dummy statistics,
but the systematic error from empty target subtraction is smaller since
the radiative corrections cancel out. The spokespersons for E02-109 have
accepted our proposal. This kind of empty target has been a standard at
SLAC for many years and should be a standard for all Jlab empty target
experiments in the future. If one uses the old thin dummy target, then
one must apply a correction to account for the difference in the radiative
correction as has been done in Hall C previously.
For E03-110 to get good data with Al, we would need not the thick 0.005
rl target but a much thicker 0.06 rl target (a factor of 11 thicker) to
get good statistics with Aluminum. The radiative corrections difference
between the very thin dummy and this very thick target will be very large
and it would not be as suitable for empty target subtractions. Therefore,
rather than use aluminum we use a 0.06 rl Carbon target which is of greater
interest to neutrino experiments.
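As a back-of-the-envelope check of the thicknesses quoted above, the radiation-length fractions can be converted to physical thicknesses using PDG material properties. The densities and X0 values below are standard PDG numbers (the carbon value assumes graphite, whose density varies by grade), so treat this as illustrative rather than the proposal's engineering figures.

```python
# Convert radiation-length fractions to physical thickness.
# X0 in g/cm^2 and density in g/cm^3 are PDG values (carbon = graphite,
# an assumption -- grade-dependent), so the results are illustrative.
X0_CM = {
    "Al": 24.01 / 2.699,   # ~8.90 cm
    "C":  42.70 / 2.210,   # ~19.3 cm (graphite)
}

def thickness_cm(material, rl_fraction):
    """Physical thickness giving the requested fraction of a radiation length."""
    return rl_fraction * X0_CM[material]

# The 0.005 rl Al dummy and the 0.06 rl C target discussed above:
print(round(thickness_cm("Al", 0.005), 3))  # ~0.044 cm
print(round(thickness_cm("C", 0.06), 2))    # ~1.16 cm
```

This makes concrete why the 0.06 rl carbon target is an easy, compact piece while a 0.06 rl aluminum dummy would differ so much from the thin dummy that the radiative corrections no longer cancel.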
5) I also would like to get a clearer
idea of the value of the proposed data in relationship to the neutrino
experiments. Do you need the measured R values as INPUT to analyze nu experiment?
In that case, can you show the impact of these proposed data on nu data
(how much does their systematic error due to lack of knowledge of R shrink)?
Or do you argue that these data can be COMPARED with nu ones? In
that case, can you give us an idea of what this comparison would look like
(what quantities can one extract or tests apply? With what confidence level?)
Some of it was discussed briefly in the answer to
question (3) above. It is probably best to read this and the associated
reference and then talk to Arie Bodek on the phone for more details (585-275-4344)
(let's set a time by email - email@example.com
). Basically, future precision neutrino oscillation experiments would
rely on this data for precise determination of the vector structure functions.
However, one cannot simply substitute this F2 and R into neutrino experiments.
One must use either a quark model or a resonance model. The vector structure
functions from electron scattering, within a particular model are input
to model scattering for electron and neutrino scattering. Electron
scattering data can only predict neutrino data within a specific model
because the couplings of electrons and neutrinos to various nuclear states
(e.g. quarks) are different. For example, at high Q2, a combined analysis
of electron and neutrino data on nuclear targets and nucleons is used to
get parton distribution functions. If one scatters from quarks, the final
state has a well defined isospin and the prediction can be made, assuming
that nuclear effects are the same for vector and axial. Rvector and Raxial
are likewise assumed to be the same (but this must fail at low Q2, because
F2 vector must go to zero while F2 axial does not, so for one R is zero
and for the other R is infinite). So the higher twists in vector and axial
are different.
case, quasielastic scattering, the vector form factors Gep, Gen, Gmp, and
Gmn in a nuclear medium must be used as input to quasielastic neutrino
data. Then, a comparison with the Q2 dependence of neutrino data is used
to determine the unknown axial form factor in a nuclear medium. However,
the final state is known to be a nucleon, I=1/2. Linear combinations of
Gep, Gen, Gmp and Gmn come into play in the vector form factor for neutrinos
from CVC. In addition, even at high Q2, in the quasielastic limit, local
duality as we know it in electron scattering is clearly not valid in the
neutrino case for the following reasons. In electron scattering Gmn and
Gmp are both non zero at high Q2. When neutrinos scatter from a neutron,
they produce a negative muon and a proton. Neutrinos cannot scatter
quasielastically from a proton, because charge conservation forces the
final state to be a negative muon and a doubly charged Delta resonance.
The converse is true for antineutrinos, which scatter quasielastically
from protons but not from neutrons. So local duality, if valid for neutrinos,
must include both the quasielastic part and the Delta, even for the vector
form factors. So, in general, in
neutrino scattering we have I=1/2 and I=3/2 final states. The relative
contributions for F2 and R of each final state can be described within
a certain model (say resonance and continuum model) which is fit
to electron scattering data on H and D, with nuclear effects added on.
Then CVC is used to predict neutrino data on nuclei. Then the model
which is fit to electron data is tested to see if it satisfies the Adler
Sum rule. We expect high statistics neutrino data to become available
in the next few years. This means that there could be about a million events
for low energies (over a range of energies). The vector prediction from
electron scattering is used as input, and the data are used to extract
the axial contribution. The axial contribution is then tested for the Adler
sum rule. Note that "high statistics" for neutrinos means of order a million
fully reconstructed events on a particular multi-ton fully active target.
Statistics to separate different effects will be limited and these experiments
are also limited by various experimental effects, for example the difficulty
in separating soft muons from soft pions in a dense target. In order to
make these tests, then, the input from electron scattering for the vector
part of the cross-section and for model building and testing is crucial.
For neutrino oscillations experiments, not only do we need to know the
cross sections, but also the final states (which are used in event reconstruction).
Therefore, this experiment is only the first phase. What is also needed
are final states in nuclei which we plan to get in the future by collaborating
with Hall B physicists as part of a long range program.
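For reference, the CVC relation invoked in the quasielastic discussion above can be written in standard form-factor notation (standard textbook notation, not the proposal's own):

```latex
% CVC for charged-current quasielastic scattering: the weak vector
% form factors are the isovector combinations of the electromagnetic
% ones (i = 1, 2 for the Dirac and Pauli form factors):
\begin{equation}
  F_i^{V}(Q^2) \;=\; F_i^{\,p}(Q^2) \;-\; F_i^{\,n}(Q^2),
  \qquad i = 1, 2,
\end{equation}
% leaving only the axial form factor F_A(Q^2) to be determined from
% the Q^2 dependence of the neutrino data.
```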
Date: Tue, 10 Jun 2003 12:36:32 -0400
From: Sebastian Kuhn <firstname.lastname@example.org>
To: Cynthia Keppel <email@example.com>
Cc: F. Klein <firstname.lastname@example.org>
Subject: Re: response to PAC reader questions
Hi Thia (and Arie),
thanks for your very detailed answers to my questions. I will ponder
them some more and look at the updated version of your proposal. Unfortunately,
it may be
hard to get together to discuss these issues in person before the PAC
meeting due to all the other events going on this week (which I am heavily
involved in). However, I'll be certain to try to contact one of you if there are
remaining questions after your presentation.
I had a quick look at some of your slides. I guess (to reiterate my
main points) what I really would like to see are "predicted pseudo-data" plotted over
existing (preliminary) results from SLAC and JLab, including 99-118.
I realize that the kinematic region is different, but it is much easier
to visualize the
impact these new data would have if they could be plotted vs. some
reasonable model predictions for R (maybe even several) in nuclei. Correspondingly, it
would also be nice to see a plot of some quantities extracted from
neutrino scattering (e.g. sum rules, moments, duality tests,...) with the
systematic error that one would get WITH and WITHOUT the knowledge these
new data will provide. I realize that all of this hangs together and each
piece of information
is valuable (for better models etc.), but it's always better to be
quantitative (or at least illustrative).
ANSWER to Email 2
It would be easier to discuss these things on the phone or in person.
(585-244-5617 home, or 585-275-4344 office, or let me know when/where
to call you.)
I have updated my presentation to address some of the issues in the
shortest way I could do it, but of course, now it is too long. See
Please look at these with the following things in mind.
A. Needs of the next generation neutrino oscillation programs in the USA
(NuMI) and Japan.
1. Understand low energy neutrino reactions to the level of 2%
in order to measure the mixing angles (is it 1.0 for muon to tau, or is
it 0.95?, etc.).
2. Understanding of both cross sections and final states to measure
the off-diagonal muon neutrino to electron neutrino mixing (of order 0.1%).
Where are we now.
1. Low energy cross sections are known to about 40%, and there is disagreement
between experiments.
2. Neutrino fluxes in those experiments were known to 10% to 20%.
3. Even the simple measurement of the axial form factor in quasielastic
required a lot of information from electron scattering.
4. Nuclear effects are large.
What do we need.
1. More input from electron scattering (as pointed out by E03-110 collaboration)
2. New Fermilab experiment (to be done at CERN) to measure particle
production in proton-Be collisions so that the neutrino fluxes
can be calculated to 2%.
3. New muon monitors in future neutrino beams to monitor neutrino fluxes.
4. New near-detector (e.g. at 280 meters) neutrino experiments to measure
axial cross sections (with input of vector cross sections from electron
scattering).
And in Japan, they are also planning to build a 1 kiloton not-so-near
detector with a similar construction to SuperK to look at neutrino interactions.
At high energies, it took a lot of investment in experiment and in theory
to finally produce PDFs in LO, NLO, NNLO. We are talking about many years
and MRS, CTEQ, GRV, theoretical work, and electron, muon, neutrino, and collider
experiments, etc., to produce PDFs which are the basis of the Physics at the Tevatron
and the LHC. One could not just measure the cross sections. One needed
universal PDFs that describe electron-hadron/nucleus, muon-hadron/nucleus,
neutrino-hadron/nucleus, proton-antiproton collisions, jets etc. Therefore,
one needed to understand the Physics and have a theoretical framework.
One could not just parametrize different sets of data with arbitrary parameters.
None of this kind of work has been done for the low energy region, which
is relevant for the next generation 10 year program in neutrino physics.
When this energy range was investigated with poor statistics experiments
years ago, agreement between data and theory to the level of 40% was considered
good. The data was never good enough to investigate the Adler sum
rules, or do L/T separation for the various resonances. Neither were DIS
and polarized target experiments good enough to investigate the high Q2
Bjorken Sum rule or the Jaffe Sum rule or the Gottfried Sum rule.
Many years later, the Bjorken sum rule is now used to extract Alpha_S and
the Gottfried sum rule to extract dbar-ubar, and the Jaffe sum rule is known
not to be valid.
Simple quark models can describe the resonance region to about 40% (since
they do not include the effect of the meson cloud). They cannot describe
nuclear effects in the resonance region. The same was true of PDFs in the
old days, they needed to be measured, and were not predicted by theory.
Once measured, the various sum rules were tested. Here, one can tune the
parameters of resonance models with new precise electroproduction data
at Jlab, which can measure L, T and nuclear effects in L and T for the
first time. Once the model is tuned to agree with electroproduction, it
can be used to constrain the vector part of the interaction. Probably,
it can be used to constrain neutrino cross sections from current 20% levels
to 5% level. Once new neutrino data becomes available, the axial
part will be known better and we hope to reach our goal of 2%.