- Training --> Especially for complex rhythms
- Testing --> To see if subjects can reproduce the rhythms
- Rescaling --> To test the hypothesis

- 17 beeps, which make 16 intervals, i.e. 8 long/short pairs.
- 4 "simple" ratios: 2:1, 3:1, 4:1, 3:2
- 3 "hard, complex" ratios: 2.72:1, 3.33:1, 1.82:1

Subjects hear the beeps and see a display showing the target ratio. They have two wooden blocks (sometimes referred to as "keys") in front of them, which they tap using one finger of each hand. While the beeps are playing, they may tap along on the blocks if they like (why might this be important?). When the beeps stop, the screen flashes a prompt and they have to start tapping out the rhythm they just heard. After 17 taps (= 8 long/short patterns) another prompt appears, followed by a display telling them how well or badly they have done. This feedback includes text reporting their average ratio, how close they were to the target, and so on. Although the researchers were mainly interested in what ratios the subjects produced, they also wanted them to tap at about the same tempo as the stimulus; the "error of last tap" figure tells them whether they took too long or were too quick overall. All of this constitutes one "trial".
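None of the following code comes from the paper; it is just a hypothetical sketch of how an "average ratio" could be computed from the 17 tap times, assuming the pattern alternates long and short intervals, starting with a long one.

```python
# Hypothetical sketch: turning 17 tap times into 8 long/short ratio estimates.
# Assumes intervals alternate long, short, long, short, ...

def produced_ratios(tap_times):
    """17 tap times -> 16 intervals -> 8 long/short ratio estimates."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    longs, shorts = intervals[0::2], intervals[1::2]
    return [lo / sh for lo, sh in zip(longs, shorts)]

# Build a "perfect" 2:1 performance with a 250 ms short interval:
taps, t = [], 0.0
for _ in range(8):
    taps.append(t); t += 0.5    # long interval (500 ms)
    taps.append(t); t += 0.25   # short interval (250 ms)
taps.append(t)                  # the 17th tap

ratios = produced_ratios(taps)
print(sum(ratios) / len(ratios))  # 2.0
```

The subject's feedback screen would presumably report this average alongside the target, but the exact computation used by the experimenters is not given in the notes.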

We saw above what a trial is. A single trial is repeated several times in a row; this is called a "phase". In order to test a single ratio, 6 phases are needed, which together are called a "block". This is because we want to train, test and rescale, all on the same ratio. But testing and training use different stimuli, so subjects have to be trained on the testing procedure too! A block looks like this:

- Phase 1: Run 12 identical training trials
- Phase 2: Run 3 test trials. This data will not be used, it is just to make sure that subjects are trained on the testing procedure as well! (Imagine we set a practice examination before the real one. Maybe this is not a bad idea!).
- Phase 3: Train some more (12 more trials).
- Phase 4: Test for 12 trials. This is the first usable data in the block!
- Phase 5: Maybe testing changed subjects' performance. Better train some more (12 trials).
- Phase 6: Rescale for 12 trials, which give us the second body of data from this block.
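The 6-phase structure above can be written down as a small schedule. The phase names and trial counts are taken from the notes; the function itself is hypothetical.

```python
# Sketch of one experimental "block" for a single target ratio.

def block_schedule(ratio):
    phases = [
        ("train", 12), ("practice-test", 3), ("train", 12),
        ("test", 12),  ("train", 12),        ("rescale", 12),
    ]
    return [(ratio, kind, n) for kind, n in phases]

sched = block_schedule("3:2")
print(sum(n for _, _, n in sched))  # 63 trials per block
```

Sixty-three trials per block, times the many blocks per subject described below, makes it clear why subjects had to be paid.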

Consider one subject (JCL). She is subjected to a block on the ratio 3:2, then a block on 2:1, then 3:2, then 2:1, then 3:2, then 2:1 again. At this stage, we might agree that the experimenters have learned everything they are ever going to learn about 2:1 and 3:2 for JCL. Not so. The subject does 6 blocks on a single complex ratio (1.82:1), and then the experimenters want to know whether doing the hard task changed her performance on the easy ratios. So she does 2 more blocks on each of 3:2 and 2:1. No wonder they have to pay their subjects!

First off, each tap was made with both hands, so the experimenters had to check whether the two hands tapped at the same time. In most cases, the taps were within 15 ms (0.015 of a second) of one another, so they decided it was safe to use the average of the two hands as a single measurement for each tap. (What if this had not been the case?)
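A minimal sketch of that preprocessing step, assuming each hand contributes one time per tap (the 15 ms criterion is from the text; the function and its behavior on a violation are assumptions):

```python
# Merge left- and right-hand tap times into single tap times,
# flagging any pair that is too far apart to average safely.

ASYNC_LIMIT = 0.015  # 15 ms, the criterion mentioned in the notes

def merge_hands(left, right):
    merged = []
    for l, r in zip(left, right):
        if abs(l - r) > ASYNC_LIMIT:
            raise ValueError(f"hands out of sync: {l} vs {r}")
        merged.append((l + r) / 2)
    return merged

print(merge_hands([0.0, 0.5], [0.012, 0.5]))  # [0.006, 0.5]
```

If the hands had routinely been far apart, averaging would blur two genuinely different responses into a number that describes neither hand.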

Sample results for a single subject:

The x-axis shows the test phase results for one subject and 3 ratios. The y-axis shows performance during the rescaling. Note that if the subjects were "perfect", the data points would all lie on the diagonal line. They are not perfect in either testing or rescaling. The fact that most points are near the diagonal for the simple ratios shows that they are pretty good on these. But look at the spread and location of the points for 2.72! During testing, this subject is not doing too badly, as his points are around 2.72 on the x-axis, even though there is a lot more variation than for the simple ratios. But during rescaling, they are all around 2, not 2.72!

Compare this subject with the subject shown in Figure 4. Here the results for the simple ratios are similar, and in testing, again, the subject produces the complex ratios fairly well, about 2.72. But in rescaling this time, they all lie around 3. This should give you a hint as to why the authors are showing the results separately for each subject.

- Means, that is, average values for ratios produced by subjects
- Variances, that is, how scattered are the responses? Are subjects consistent? In general, greater VARIANCE is associated with greater uncertainty in how to do a task and greater task DIFFICULTY.
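These two summary statistics are easy to compute with the standard library. The numbers below are invented purely for illustration, shaped to mimic the qualitative pattern reported in the notes (a slight undershoot of 2:1, and much more scatter for the complex ratio):

```python
import statistics

simple  = [1.92, 1.95, 1.88, 1.97, 1.90, 1.94]   # made-up data, target 2:1
complex_ = [2.40, 2.95, 2.10, 3.05, 2.60, 1.95]  # made-up data, target 2.72:1

for name, data in [("simple", simple), ("complex", complex_)]:
    print(name,
          round(statistics.mean(data), 3),
          round(statistics.variance(data), 4))
```

The mean captures the bias (how far the produced ratio sits from the target); the variance captures consistency, and hence, on the interpretation above, task difficulty.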

When doing a statistical test, for example looking for a difference between two averages, the result is said to be SIGNIFICANT or NON-SIGNIFICANT. This is NOT the same thing as IMPORTANT/UNIMPORTANT! A large difference will usually be "significant", but if it is predictable and boring, it may be unimportant. Conversely, a non-significant result may still point to something interesting; the lack of significance may simply come from using a weak test or not having quite enough data.
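A toy illustration of the "not enough data" point (nothing here is from the paper): for a two-sample t test with equal group sizes and equal spread, the t statistic grows roughly with the square root of the sample size, so the very same mean difference can fall below or above the usual critical value of about 2.

```python
import math

def t_stat(diff, sd, n):
    # Two-sample t with equal n and equal sd: t = diff / (sd * sqrt(2/n)).
    return diff / (sd * math.sqrt(2 / n))

for n in (5, 50):
    print(n, round(t_stat(0.1, 0.15, n), 2))  # same difference, growing t
```

With n = 5 per group the difference of 0.1 is "non-significant"; with n = 50 it comfortably is. The difference itself never changed.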

- Simple ratios were not produced exactly. Rather, subjects produced smaller ratios than the targets (a "bias"). This is an unexpected and unexplained result!
- Complex ratios are also not produced accurately. The bias is greater than for simple ratios. It is sometimes, but not always in the same direction as for the simple ratios (i.e. a smaller value than the target).
- Variance was also higher for complex than simple ratios. This suggests that the task is "harder" for complex targets.

No improvement observed for simple ratios!

- Rescaled ratios are always different from the test ratios for complex patterns. This is both significant and important.
- Rescaled ratios are slightly larger than the test ratios for simple patterns. This is not significant (statistically) but may well be important.
- Again, variance is greater for complex ratios than simple ratios.

For both simple and complex ratios, rescaling performance is always imperfect.

- A clock-counter model contains a mechanism (unspecified here) which produces an isochronous pulse train, i.e. a series of evenly spaced pulses, or beats.
- In principle, this model could handle a complex ratio like 2.72:1 as well. But that would require counting out 68 beats followed by 25 beats. So some limitation on either the speed of the beats or the number of beats in a pattern is needed.
- Coupled oscillators could form the basis for a clock-counter model of this sort. They would prefer simple ratios, because oscillators couple in ratios like 2:1, 3:1 etc.
- A clock-counter model may be able to give an account of how trained musicians produce polyrhythms. Musicians report using a "cross-rhythm", which consists of a single pulse train fast enough that every accent in the polyrhythm falls on a pulse in the underlying train.

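The counting argument can be checked directly: any decimal target ratio has an exact small-integer form, but for the complex targets the integers are implausibly large for a pulse-counting mechanism. A quick sketch using the ratios from the notes:

```python
from fractions import Fraction

# Targets from the experiment: 4 simple, 3 complex.
for r in ("2", "3", "4", "1.5", "2.72", "3.33", "1.82"):
    f = Fraction(r)  # exact lowest-terms form, e.g. 2.72 -> 68/25
    print(r, "->", f.numerator, ":", f.denominator)
```

The simple targets reduce to counts like 2:1 or 3:2, while 2.72:1 demands 68 pulses against 25, which is where the "limitation on the speed or number of beats" comes in.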
## The rest of the story?

They carry out further experiments which attempt to produce evidence in favor of a clock-counter model over competing models such as GMP/PS. The final experiment (expt 4) addresses the problem of training. In the above experiment, remember, subjects showed little or no improvement on complex rhythms, despite considerable practice. In the final experiment, a single subject is trained much more intensively. Although he shows some improvement on the test phase (reproducing a complex rhythm), his performance after rescaling never improves.

There are many unsolved questions remaining. Here are just some of them:

- Why do people consistently produce ratios like 1.9:1 instead of 2:1?
- If we prefer simple ratios in rhythmic tasks, what about non-rhythmic tasks like reaching and grabbing, or catching a baseball? Is the timing mechanism here completely different?
- Does musical training, and training in producing complex rhythms, make a large difference to performance?
- (Hard question) How does careful experimental work like this, where subjects are performing an artificially simple task in a laboratory, relate to their performance in the real world, where demands are much more complex?
- What is the connection (if any) between simple tapping rhythms and musical or speech rhythms? Or walking, running etc?
- Is the distinction between "simple" rhythms and "complex" rhythms cut and dried? What about 3:2? 5:1? 11:1? (This might be a good starting point for your own experimental design!)