The place for all things wine, focused on serious wine discussions.

Extreme Variation in Evaluation?

Moderators: Jenise, Robin Garr, David M. Bueker

User

Sam Platt

Rank

I am Sam, Sam I am

Posts

2330

Joined

Sat Mar 25, 2006 12:22 pm

Location

Indiana, USA

Extreme Variation in Evaluation?

by Sam Platt » Tue Dec 02, 2008 1:35 pm

I was attempting to buy the 2005 Vinedos Alonso del Yerro Cuvee Maria, which I had read some good things about. I was not able to find it locally and started searching the internet for a reasonably priced source. In the process I stumbled onto a huge discrepancy between the Wine Spectator and the Wine Advocate in their ratings for this wine. The Advocate rated it a 95 and gave a glowing review, while the Spectator rated it 78 and trashed the wine. I typically rely on word-of-mouth recommendations, and I don't get hung up on points, but I am troubled by the dramatically different ratings between the two sources. Assuming that the WS bottle was not flawed, and that bottle variation alone would not account for a 17-point score differential, how can two groups of experienced, professional wine tasters come to such different conclusions about a wine? I can understand a swing of +/- 5 points, but 17! That doesn't say much for the objectivity and/or capability of the tasters, in my opinion.
Sam

"The biggest problem most people have is that they think they shouldn't have any." - Tony Robbins
User

Redwinger

Rank

Wine guru

Posts

4038

Joined

Wed Mar 22, 2006 2:36 pm

Location

Way Down South In Indiana, USA

Re: Extreme Variation in Evaluation?

by Redwinger » Tue Dec 02, 2008 1:40 pm

Now Sam, you wouldn't be trying to stir the pot? Or would you? :wink:

'Winger
Smile, it gives your face something to do!
User

Paul Winalski

Rank

Wok Wielder

Posts

8033

Joined

Wed Mar 22, 2006 9:16 pm

Location

Merrimack, New Hampshire

Re: Extreme Variation in Evaluation?

by Paul Winalski » Tue Dec 02, 2008 1:43 pm

This can happen very easily if two critics are judging the wine using different standards. There exists no objective set of criteria for evaluating wine. Wine tasting is in its essence a subjective experience.

This is why a rating number, by itself, tells you next to nothing about whether or not you will like the wine in question. You have to know something about the tastes of the critic who awarded the score.

Forget the numbers and read the prose tasting notes instead.

-Paul W.
User

Sam Platt

Rank

I am Sam, Sam I am

Posts

2330

Joined

Sat Mar 25, 2006 12:22 pm

Location

Indiana, USA

Re: Extreme Variation in Evaluation?

by Sam Platt » Tue Dec 02, 2008 1:46 pm

Redwinger wrote:Now Sam, you wouldn't be trying to stir the pot? Or would you? :wink:

'Winger

My motives are pure, Bill. I don't do scoring comparisons as a matter of course, but I don't ever recall seeing such extreme variation between sources; ethereal according to one source, and difficult to swallow according to the other. If they are to be taken seriously, that shouldn't happen, barring a clear flaw in one of the bottles.
Sam

"The biggest problem most people have is that they think they shouldn't have any." - Tony Robbins
User

Sam Platt

Rank

I am Sam, Sam I am

Posts

2330

Joined

Sat Mar 25, 2006 12:22 pm

Location

Indiana, USA

Re: Extreme Variation in Evaluation?

by Sam Platt » Tue Dec 02, 2008 1:51 pm

Paul Winalski wrote:This can happen very easily if two critics are judging the wine using different standards.

Do you really think so, Paul? At a professional level? You may be right, but such a tremendous variation is disturbing to me. I don't prefer Sancerre, and I'm far from professional, but I bet that a Sancerre lover and I would score a specific Sancerre within 5 points of each other based on the merits of the wine alone. At that level the tasters should be able to set their personal biases aside, at least to a large extent.
Sam

"The biggest problem most people have is that they think they shouldn't have any." - Tony Robbins
User

Clinton Macsherry

Rank

Ultra geek

Posts

354

Joined

Tue Mar 28, 2006 1:50 pm

Location

Baltimore MD

Re: Extreme Variation in Evaluation?

by Clinton Macsherry » Tue Dec 02, 2008 3:53 pm

Sam Platt wrote:...how can two groups of experienced, professional wine tasters come to such different conclusions about a wine? I can understand a swing of +/- 5 points, but 17!


It happens now and then, Sam. I don't follow very closely, but I remember a similar WA-WS disparity on Vitiano several years ago, and I suspect there were a good many more when Laube was trashing some prominent Cal-Cabs a few vintages back. If you looked at other sources of point ratings (Tanzer, e.g.), not just WS and WA, wide swings might be more common still.
FEAR THE TURTLE ! ! !
User

Robin Garr

Rank

Forum Janitor

Posts

21623

Joined

Fri Feb 17, 2006 1:44 pm

Location

Louisville, KY

Re: Extreme Variation in Evaluation?

by Robin Garr » Tue Dec 02, 2008 3:54 pm

Paul Winalski wrote:This can happen very easily if two critics are judging the wine using different standards. There exists no objective set of criteria for evaluating wine. Wine tasting is in its essence a subjective experience.

This is why a rating number, by itself, tells you next to nothing about whether or not you will like the wine in question. You have to know something about the tastes of the critic who awarded the score.

Forget the numbers and read the prose tasting notes instead.

Paul, while that's true from a strict scientific interpretation, let me assure you that having judged quite a few wine competitions in Europe and Down Under, a competent panel of judges using generally agreed-upon criteria and scoring will come up with surprisingly close results, particularly if the customary method of throwing out the highest and lowest scores in each panel is followed.

Sure, wine tasting is subjective. But judging wine based upon accepted criteria and known flaws, assuming reasonably skilled tasters as judges, does make it possible for a group of individuals to achieve surprisingly consistent scoring results.
User

Tim York

Rank

Wine guru

Posts

4925

Joined

Tue May 09, 2006 2:48 pm

Location

near Lisieux, France

Re: Extreme Variation in Evaluation?

by Tim York » Tue Dec 02, 2008 4:00 pm

Sam Platt wrote:
Paul Winalski wrote:This can happen very easily if two critics are judging the wine using different standards.

Do you really think so, Paul? At a professional level? You may be right, but such a tremendous variation is disturbing to me. I don't prefer Sancerre, and I'm far from professional, but I bet that a Sancerre lover and I would score a specific Sancerre within 5 points of each other based on the merits of the wine alone. At that level the tasters should be able to set their personal biases aside, at least to a large extent.


Remember the stand-off between Parker and Robinson about Château Pavie.

They were being true to their tastes and I think that is absolutely fair. If the TN is well written, it should give the clue and allow the reader to calibrate to his own taste.
Tim York
User

Daniel Rogov

Rank

Resident Curmudgeon

Posts

0

Joined

Fri Jul 04, 2008 3:10 am

Location

Tel Aviv, Israel

Re: Extreme Variation in Evaluation?

by Daniel Rogov » Tue Dec 02, 2008 4:17 pm

Robin wrote: a competent panel of judges using generally agreed-upon criteria and scoring will come up with surprisingly close results, particularly if the customary method of throwing out the highest and lowest scores in each panel is followed.



Robin, Hi....

Two issues here - that of individual professionals and of panels. Panel tastings invariably have an anonymous nature to them so that one reading the "compiled" tasting notes cannot develop a sense of continuity for that particular panel or set of panels. Because panels rarely remain consistent (either from issue to issue of a magazine or from competition to competition), it is difficult for readers/consumers to set a base line for what they may or may not enjoy by reading those tasting notes and scores.

Further with regard to panels, with which, as can be seen, I have several problems: the one that troubles me most is perhaps the policy of throwing out the highest and lowest scores in each panel (generally of 6-7 people). From the statistical point of view this policy leads to regression to the mean, which means that higher-rated wines tend to fall in score while mid-level wines tend to rise. Also related to this is the possibility that the one person scoring high or low may have actually been "right".


Best
Rogov
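Rogov's statistical point is easy to see with invented numbers. A minimal sketch, assuming an 8-person panel and purely hypothetical scores: dropping the single highest and lowest score pulls a wine championed by one taster downward, and a wine trashed by one taster upward.

```python
# Hypothetical scores from an 8-person panel (100-point scale).
# "champion": one taster scores the wine far above the rest;
# "detractor": one taster scores it far below.
champion = [98, 93, 92, 92, 91, 91, 90, 90]
detractor = [90, 86, 85, 85, 84, 84, 83, 75]

def plain_mean(scores):
    return sum(scores) / len(scores)

def trimmed_mean(scores):
    """Panel convention: discard the single highest and lowest score."""
    inner = sorted(scores)[1:-1]
    return sum(inner) / len(inner)

# Trimming pulls both wines toward the middle of the field:
# the high-flyer's average falls, the low one's rises.
print(plain_mean(champion), trimmed_mean(champion))    # 92.125 -> 91.5
print(plain_mean(detractor), trimmed_mean(detractor))  # 84.0 -> 84.5
```

It also shows Rogov's last worry: the discarded 98 or 75 might have been the one "right" score, and it leaves no trace in the published average.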
User

Ian Sutton

Rank

Spanna in the works

Posts

2558

Joined

Sun Apr 09, 2006 2:10 pm

Location

Norwich, UK

Re: Extreme Variation in Evaluation?

by Ian Sutton » Tue Dec 02, 2008 4:23 pm

Rogov
Add to that the possibility of a vocal/senior/respected panel member swaying others their way!

caveat: I understand many panels score individually and then compile and discuss (and berate outliers) afterwards - just to avoid this sort of bias creeping in.

regards

Ian
Drink coffee, do stupid things faster
User

Daniel Rogov

Rank

Resident Curmudgeon

Posts

0

Joined

Fri Jul 04, 2008 3:10 am

Location

Tel Aviv, Israel

Re: Extreme Variation in Evaluation?

by Daniel Rogov » Tue Dec 02, 2008 4:51 pm

Indeed true, but panels at which anyone discusses or even gives the vaguest hints as to his/her preferences before the tasting has been fully completed and the tasting notes handed in are badly managed panels whose results belong in the trash and not in print or used to judge wines.

Even at trade tastings, I am one of those who scorns any colleague who dares to comment about wines being tasted. The comments of writers/critics should appear first in print (or on the internet) precisely so that they cannot be influenced by others.

Best
Rogov
User

David Creighton

Rank

Wine guru

Posts

1217

Joined

Wed May 24, 2006 10:07 am

Location

ann arbor, michigan

Re: Extreme Variation in Evaluation?

by David Creighton » Tue Dec 02, 2008 7:12 pm

rogov at his curmudgeonly best above. i totally disagree. as a panelist myself, i am often enlightened by others' opinions and often return the favor. panelists should be, and usually are, willing to listen to reasonable discussion and change their minds and scores if the actual wine justifies it. this is most especially true for unique grape varieties and for wines from possibly unfamiliar terroirs.
david creighton
User

Sam Platt

Rank

I am Sam, Sam I am

Posts

2330

Joined

Sat Mar 25, 2006 12:22 pm

Location

Indiana, USA

Re: Extreme Variation in Evaluation?

by Sam Platt » Tue Dec 02, 2008 7:50 pm

I have to believe that one of the tasters/tasting groups was objectively wrong in this case. A 17-point variation on what is effectively a 30-point scale is significant. For professional tasters whose opinions are published for thousands to read, that seems unacceptable. I would wager that any 10 posters on Robin's forum who agreed on some standards ahead of time, and then tasted a specific example of a style that each was familiar with, would not vary 17 points among them. In fact, I would be surprised if the variation was half that large, barring an obvious flaw.
Sam

"The biggest problem most people have is that they think they shouldn't have any." - Tony Robbins
User

Paul Winalski

Rank

Wok Wielder

Posts

8033

Joined

Wed Mar 22, 2006 9:16 pm

Location

Merrimack, New Hampshire

Re: Extreme Variation in Evaluation?

by Paul Winalski » Wed Dec 03, 2008 12:45 am

Bottom line: numeric scores are bullshit. Nothing more needs to be said.

-Paul W.
User

Daniel Rogov

Rank

Resident Curmudgeon

Posts

0

Joined

Fri Jul 04, 2008 3:10 am

Location

Tel Aviv, Israel

Re: Extreme Variation in Evaluation?

by Daniel Rogov » Wed Dec 03, 2008 3:36 am

Paul Winalski wrote:Bottom line: numeric scores are bullshit. Nothing more needs to be said.



Paul, Hello.....

Indeed, as was said above, I seem to be at my curmudgeonly best on this thread. Let me thus ask: when scores vary so widely, so do the descriptive tasting notes. Should we conclude from that that tasting notes, like scores, are worthless?

Best
Rogov
User

Nigel Groundwater

Rank

Ultra geek

Posts

153

Joined

Sat Dec 08, 2007 2:08 pm

Location

London, UK

Re: Extreme Variation in Evaluation?

by Nigel Groundwater » Wed Dec 03, 2008 8:37 am

Tim York wrote:
Sam Platt wrote:
Paul Winalski wrote:This can happen very easily if two critics are judging the wine using different standards.

Do you really think so, Paul? At a professional level? You may be right, but such a tremendous variation is disturbing to me. I don't prefer Sancerre, and I'm far from professional, but I bet that a Sancerre lover and I would score a specific Sancerre within 5 points of each other based on the merits of the wine alone. At that level the tasters should be able to set their personal biases aside, at least to a large extent.


Remember the stand-off between Parker and Robinson about Château Pavie.

They were being true to their tastes and I think that is absolutely fair. If the TN is well written, it should give the clue and allow the reader to calibrate to his own taste.


Tim
IMO the Bob & Jancis difference over 03 Pavie [incidentally, Jancis rather liked 01 Pavie at a recent tasting, having rated it pretty well en primeur too] is possibly not a good illustration. That spat, which also polarised most of the top critics along mainly national lines [Coates, Broadbent, Spurrier and Schuster differing from Parker, Suckling and Tanzer, with the French critics Quarin and Bettane also opposed], was inflamed by the completely ludicrous suggestion that Jancis had known what she was tasting when she wrote her note and rated the wine - even more so after she had confirmed that she had not.

Even more importantly, both RP and JR have publicly and together put that incident behind them.

More to the point, the wine they tasted was not only separated in time and place but was part of the notoriously unreliable en primeur circus, where RP usually gets his own sample and tasting. I say 'unreliable' since local [in the sense that they live there] top critics like Jean-Marc Quarin have shown that different samples of the 'same' wine prepared for the en primeur tastings [sometimes not even the final assemblage] can be quite different from each other.

I have yet to see another 03 Pavie rating from Jancis, but I suspect that if RP had tasted what Jancis tasted, and vice versa, there might not have been so large a difference of opinion, although their 'models' of the ideal wine clearly differ at certain extremes. IMO it is when these 'boundaries' are approached and crossed that opinions and ratings can begin to diverge sharply.

Nevertheless they also happen to agree far more than they disagree.

Which is what I usually think when I see differences such as the one in the original post. While it is possible that the same result would have occurred if the bottles had been exchanged between the WA and the WS, there could be a significant difference in what was tasted - possibly also affected by when and in what circumstances.

OK, 17 points is huge, and another contributory factor might be that the wine really did stray outside the expected/typical parameters for the WS taster/s. I understand that they taste blind, but assume it isn't double blind - i.e. the label was concealed, so they knew the wine's country of origin and possibly its region, but not its producer. I could see how that might contribute to an extreme result, just as lack of 'typicité' affected the Pavie spat, but I would still suggest that a possible contributor could be a difference in the sample itself if the tastings were not from bottle and were significantly separated in time. I will do some analysis and report back.
User

Tim York

Rank

Wine guru

Posts

4925

Joined

Tue May 09, 2006 2:48 pm

Location

near Lisieux, France

Re: Extreme Variation in Evaluation?

by Tim York » Wed Dec 03, 2008 8:58 am

Nigel,

Maybe divergent samples played an important part in the famous Pavie spat, but I think that there is no dishonour, rather the reverse, in critics reviewing and rating according to their tastes.

I would frankly be disappointed if, say, Michael Broadbent wrote an "objective" review about some blockbusting oak monster which appeals to the WS critics.
Tim York
User

Nigel Groundwater

Rank

Ultra geek

Posts

153

Joined

Sat Dec 08, 2007 2:08 pm

Location

London, UK

Re: Extreme Variation in Evaluation?

by Nigel Groundwater » Wed Dec 03, 2008 9:00 am

Paul Winalski wrote:Bottom line: numeric scores are bullshit. Nothing more needs to be said.

-Paul W.


Surely points are simply a shorthand way for a critic to quantify his/her appreciation of the wine. If the critic is any good, which includes being consistent, they allow some measure of calibration for the regular reader. That does not mean the critic's score is your or my score, but it does provide a crude, directional [at a minimum] indication of how the wine fits the critic's model of a 'good' wine.

Of course some people find no value in that, but for someone buying wine, e.g. en primeur, when they cannot taste it - or who, even if they could, feels that their expertise in judging such embryonic wine requires extra validation - it would be of some use.

Frankly, I prefer to use the opinions of a diverse group of UK, US and French critics, and if they all agree and it's a wine I am interested in or already know, I will probably buy. Point scores certainly help in that regard. Of course I often buy based on many years of buying from a particular producer even if there is some controversy over the wine, but at least I have some additional information which helps make the decision.

A call of BS on its own seems somewhat excessive since, as others have also suggested, the TN and the score are integral parts of a consistent and logical appreciation of the wine.
Last edited by Nigel Groundwater on Wed Dec 03, 2008 9:21 am, edited 1 time in total.
User

Nigel Groundwater

Rank

Ultra geek

Posts

153

Joined

Sat Dec 08, 2007 2:08 pm

Location

London, UK

Re: Extreme Variation in Evaluation?

by Nigel Groundwater » Wed Dec 03, 2008 9:20 am

Tim York wrote:Nigel,

Maybe divergent samples played an important part in the famous Pavie spat but I think that there is no dishonour, rather the reverse, in critics reviewing and rating according to their tastes.

I would frankly be disappointed if, say, Michael Broadbent wrote an "objective" review about some blockbusting oak monster which appeals to the WS critics.


Tim, I completely agree with your first [and main] point that critics should call it as they see it - blind [as JR's was] and unblind.

As far as panels are concerned, I think it is a pity [and I have argued unsuccessfully with the editor] that Decanter magazine does not publish the max and min points awarded in their monthly blind panel tastings alongside the 'average' that is posted. With, say, 8 [pretty typical] tasters, a wine that 'averages' 16.5 from a min/max of 16/17 is likely to be quite different from a wine where the min/max was, say, 13/19.
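The difference Nigel describes can be sketched with hypothetical numbers: two invented 8-taster panels whose published averages are identical but whose min/max spreads tell very different stories.

```python
# Two hypothetical 8-taster panels on Decanter's 20-point scale.
# Both average exactly 16.5, but only one reflects real consensus.
consensus = [16, 16, 16.5, 16.5, 16.5, 16.5, 17, 17]
divided = [13, 14, 15.5, 16.5, 17.5, 18, 18.5, 19]

def mean(scores):
    return sum(scores) / len(scores)

def min_max(scores):
    """The extra numbers Nigel would like published alongside the average."""
    return min(scores), max(scores)

print(mean(consensus), min_max(consensus))  # 16.5 (16, 17)
print(mean(divided), min_max(divided))      # 16.5 (13, 19)
```

Publishing only the 16.5 makes the two wines look interchangeable; the min/max pair is what reveals whether the panel actually agreed.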

BTW, Michael Broadbent has written such 'objective' reviews, but the reader is left in no doubt - viz. his review of 2000 Pavie in Vintage Wine: 2* for him, he said, but 5* for wine competitions and another major audience. He even used the word 'impressive' in his tasting note.
User

Florida Jim

Rank

Wine guru

Posts

1253

Joined

Wed Mar 22, 2006 1:27 pm

Location

St. Pete., FL & Sonoma, CA

Re: Extreme Variation in Evaluation?

by Florida Jim » Wed Dec 03, 2008 9:23 am

Sam Platt wrote:I was attempting to buy the 2005 Vinedos Alonso del Yerro Cuvee Maria, which I had read some good things about. I was not able to find it locally and started searching the internet for a reasonably priced source. In the process I stumbled onto a huge discrepancy between the Wine Spectator and the Wine Advocate in their ratings for this wine. The Advocate rated it a 95 and gave a glowing review, while the Spectator rated it 78 and trashed the wine. I typically rely on word-of-mouth recommendations, and I don't get hung up on points, but I am troubled by the dramatically different ratings between the two sources. Assuming that the WS bottle was not flawed, and that bottle variation alone would not account for a 17-point score differential, how can two groups of experienced, professional wine tasters come to such different conclusions about a wine? I can understand a swing of +/- 5 points, but 17! That doesn't say much for the objectivity and/or capability of the tasters, in my opinion.


Sam,
Just a suggestion: do not look at the score; simply read the notes side by side and see if there is anything in each which corresponds to the other. Maybe one will say "gentle oak" and the other will say "oak soup." If I read that, I would discern that there was noticeable oak.
I think the same thing can be done with aroma and flavor descriptions. I would try to get something out of them that might give you an inkling of what the wine really is, not just what each taster(s) thought of it.
Sometimes it doesn't work, sometimes it does.
But the numbers are not useful for that sort of thing.
Best, Jim
Jim Cowan
Cowan Cellars
User

David Creighton

Rank

Wine guru

Posts

1217

Joined

Wed May 24, 2006 10:07 am

Location

ann arbor, michigan

Re: Extreme Variation in Evaluation?

by David Creighton » Wed Dec 03, 2008 9:54 am

are we even certain that the tasters were tasting the same wine? i mean, when the wal-mart wine or two buck chuck wins a gold medal, i don't assume that the bottle i buy in my store is anything like the one that got submitted to the competition. is the wine in question an estate wine? do they bottle an early version and then, months later, another that has been kept that time in oak?
david creighton
User

Bill Spohn

Rank

He put the 'bar' in 'barrister'

Posts

9522

Joined

Tue Mar 21, 2006 7:31 pm

Location

Vancouver BC

Re: Extreme Variation in Evaluation?

by Bill Spohn » Wed Dec 03, 2008 11:16 am

Ah, the old point discussion.

Not a point person myself. Poor old Bob must get a bit disheartened sometimes when a significant percentage of his followers don't even look at what he said - in fact, don't look past the numerical score. On my more cynical days (they tend to come later in the week) I think of suggesting that Parker offer a modified subscription to his newsletter at half price that gives nothing but a list of scores, no descriptions whatsoever. My bet is that he would find far more takers than would be good for his self-image.

But the one thing Bob is, is reliable. Even if you don't agree with his assessments, you should, with a bit of experience, be able to adjust your own expectations in one direction or the other. I'd note that this is getting harder to do as he farms out certain areas to other tasters.

Wine Speculator, OTOH, often practices rating by committee, and there is little consistency unless you are looking at a single reviewer - Suckling, perhaps - and can fairly develop an idea of how your taste relates to his. I haven't subscribed for years, but I recall that at one point, whenever I saw a rating under 85 (or especially under 80) for Italian wines, it meant that they least resembled international (the skeptical may read 'Californian') archetypes and most resembled traditional Italians, and I'd go taste and usually buy them.

When you see large disparities it can also mean a big difference in the bottles tasted, of course. I always hate it when Bob reviews a wine and then never goes back to it. Ten years later, the MM (Marching Morons) still use his reviews as gospel even though the current wine probably bears very little resemblance to the wine originally reviewed.

The other problem is when they do get a bad bottle. There was one Bordeaux that Parker vastly underrated, and that was therefore readily available for years at give-away prices - until a friend of mine who knows RP emailed him and told him that he should revisit the wine, as it was much better than the review indicated. The revisitation resulted in a revised score and a predictably elevated sale price.
User

Nigel Groundwater

Rank

Ultra geek

Posts

153

Joined

Sat Dec 08, 2007 2:08 pm

Location

London, UK

Re: Extreme Variation in Evaluation?

by Nigel Groundwater » Wed Dec 03, 2008 11:45 am

Sam Platt wrote:I was attempting to buy the 2005 Vinedos Alonso del Yerro Cuvee Maria, which I had read some good things about. I was not able to find it locally and started searching the internet for a reasonably priced source. In the process I stumbled onto a huge discrepancy between the Wine Spectator and the Wine Advocate in their ratings for this wine. The Advocate rated it a 95 and gave a glowing review, while the Spectator rated it 78 and trashed the wine. I typically rely on word-of-mouth recommendations, and I don't get hung up on points, but I am troubled by the dramatically different ratings between the two sources. Assuming that the WS bottle was not flawed, and that bottle variation alone would not account for a 17-point score differential, how can two groups of experienced, professional wine tasters come to such different conclusions about a wine? I can understand a swing of +/- 5 points, but 17! That doesn't say much for the objectivity and/or capability of the tasters, in my opinion.


Sam
The winery was apparently bought in 2002 so there are only 3 vintages available for comparison.

Having looked at the WS and WA reviews of Vinedos Alonso del Yerro's 'standard' and 'Maria' cuvees for 2003/04/05, the only clear message is that the WA [RP for the first 2 years and Jay Miller for 05] seems to like the wines much more than the WS [all reviews by Thomas Matthews] - as I am sure you know, this occasional sharp division of opinion over certain wines/areas is not unique.

These are therefore not panel reviews but are the opinion of single authors which presumably provides a greater potential for difference of opinion.

I believe the WA's singular approach is known, and the WS states: "A taster's initials at the end of the tasting note [as they are for all these wines] indicate that the rating and review were created by that taster in one of our blind tastings. Other tasters may [there is no evidence they did] sit in on blind tastings in order to help confirm impressions. However, the lead taster always has the final say on the wine's rating and description."

For Vinedos Alonso del Yerro, WA scores are consistently in the low to mid 90s whereas WS scores are in the low to mid 80s, the exception being the 05 Maria at 78, where, as you say, the spread between the WA and WS ratings is 17.

According to their archive, the WS has only rated the Maria once [the 05, sometime in 2007] and, interestingly, rated the 05 standard cuvee at 87 - 9 points higher than the more expensive wine and the highest rating the WS has given any Vinedos Alonso del Yerro wine. Bear in mind that these WS reviews of the 2 cuvees were separated in time by several months whereas the WA reviews were contemporaneous.

The WA ratings had the 05 Maria 1 point above the 05 standard cuvee, so the differential versus the WS was 7 for the standard cuvee compared to 17 for the Maria.

Apart from the fact that there appears to be a significant, general difference of opinion between the WS and the WA over the wines from this new Spanish operation, whatever else might be in play is only available from the limited TNs - the WS 05 Maria TN being a single, terse sentence.

Perhaps a clue comes in the words "rather tough", which might be partially echoed in the WA's much more complimentary note, which included "plenty of ripe tannin and a 60+ second finish. Give it 5-7 years of cellaring" - with an indicated drinking window of 2013-2025, which suggests that there is plenty to be resolved, even though the WA note indicated it was more drinkable than the standard cuvee.

Interestingly, IWC [Josh Raynolds] is closer to the WA opinion than the WS, with 90+? for the 05 Maria and a point less for the standard cuvee - and similarly for the earlier years.

This appears to be a concentrated Spanish wine with plenty of tannin, still tight and unresolved, and requiring time before reaching its assessed potential. One reviewer is unclear that it ever will, another believes it will, and a third seems a lot more certain - although even there the suggested drinking window begins over 4 years from now.

There is still the point that they were all tasting different samples of the wine at different times and in different circumstances. And yes, IMO, a 17-point difference probably means this latter fact was a factor, whatever the other differences between the tasters for this [type of] wine.
User

Daniel Rogov

Rank

Resident Curmudgeon

Posts

0

Joined

Fri Jul 04, 2008 3:10 am

Location

Tel Aviv, Israel

Re: Extreme Variation in Evaluation?

by Daniel Rogov » Wed Dec 03, 2008 12:32 pm

As might be said: Oi vey is mei (more or less, woe is me) - the terrible score issue rears its head yet once again.

As far as I am personally concerned, scores have absolute meaning only in one circumstance: when the same critic tastes the same wine at the same tasting, from the same bottle, poured at the same time into two different numbered glasses, without knowing that the wine in glass #11 is the same as that in glass #32. If the critic in question awards the wine 89 points on the first go-around and 90 on the second, no problem. If, on the other hand, he/she awards the wine 89 on the first taste and 78 on the second, that means that either his/her concentration was not focused at that tasting or there is something amiss with his/her palate that day. In such cases the tasting notes from the entire tasting should be discarded as having dubious value at best.

My own leeway for such "doubling-up" endeavors is plus or minus 1.5-2 points. More than that, and the circular file comes into use.
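Rogov's self-check amounts to a simple tolerance test. A minimal sketch, with his stated 2-point leeway as the default and his 89/90 and 89/78 examples as inputs:

```python
def palate_is_consistent(score_a, score_b, tolerance=2.0):
    """Same wine, poured blind into two numbered glasses at one tasting:
    the two scores should agree within the critic's allowed leeway."""
    return abs(score_a - score_b) <= tolerance

# 89 vs 90: fine. 89 vs 78: the whole tasting goes in the circular file.
print(palate_is_consistent(89, 90))  # True
print(palate_is_consistent(89, 78))  # False
```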

As I have said on many occasions before, scores are nothing more than the individual critic's summing up of the quality of the wine. Not of its characteristics or its charms, but entirely of its quality. Them that reads scores alone may wind up with good wines, but those wines may or may not be to their taste. Them that comes to know the tastes, idiosyncrasies and foibles of a given critic or set of critics may use scores as an adjunct to the tasting notes. Nothing less, nothing more.

On which I return to my glass of wine (the excellent 2003 Cabernet Sauvignon Special Reserve of Margalit).

Margalit, Special Reserve, Cabernet Sauvignon, 2003: Rich, ripe and concentrated, with layer after layer of dark plum, currant, anise, mocha, black cherries and sage. An oak-aged blend of Cabernet Sauvignon and Petite Sirah (87% and 13% respectively), this distinctly Old World wine has excellent balance between wood, lively acidity and well-integrated tannins. Complex and long. Drink now–2013. Score 93.

Best
Rogov