is there a better way for them to develop a list? or is any list developed by more than one person subject to that problem?
I agree it's difficult. Their method is:
We asked our contributors to nominate their top ten records, CDs, downloads and streams of the year, then we counted up the votes.
I'm guessing the points are therefore 1st=10, 2nd=9, 3rd=8, and so forth, which seems all very fair.
The problem is that "fair" will tend to result in "good" rather than "best". For "best" I think a method should reward genuine enthusiasm over gentle praise. For instance, if they graded the votes with a Formula One-style points system:
1=25, 2=18, 3=15, 4=12, 5=10, 6=8, 7=6, 8=4, 9=2, 10=1
Then it would take six (rather than four) halfway "pretty good"s to triumph over two "absolutely the best"s: two first places score 50 and a fifth place scores 10, so five fifths only tie and six are needed to win, whereas under the linear scheme two firsts score just 20, which four fifth places (4 × 6 = 24) already beat.
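The arithmetic above can be checked with a short sketch (the scheme names and the choice of fifth place as "halfway pretty good" are my own labels, not anything from the poll):

```python
# Linear scheme guessed from the magazine's method: 1st=10 ... 10th=1
linear = {pos: 11 - pos for pos in range(1, 11)}

# Formula One-style points for positions 1..10
f1 = dict(zip(range(1, 11), [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]))

def votes_to_beat(scheme, n_best, pos):
    """Smallest number of `pos`-place votes that strictly beats
    `n_best` first-place votes under the given scheme."""
    target = n_best * scheme[1]
    k = 1
    while k * scheme[pos] <= target:
        k += 1
    return k

# Fifth place standing in for a "halfway pretty good" vote
print(votes_to_beat(linear, 2, 5))  # 4
print(votes_to_beat(f1, 2, 5))      # 6
```

So the F1-style weighting makes two passionate first-place votes half again as hard to overturn with lukewarm mid-table praise.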