http://www.bayareaveg.org/ug/featured.htm

Tammy

Jon, I agree with you. The ‘currency’ of a review is also an important factor, and we are considering making it a factor in any revision we do to the Top Ten list. Thanks for updating your review of Manzanita. I wish more UG users would take the extra step of writing reviews as well.

Erhhung, I like the ‘trusted reviewer’ idea. I am not sure we would see a return on investment for this (i.e., enough reviewers to make it worthwhile for the amount of volunteer coding and testing effort). Far more people use UG for searches, etc. than for writing reviews.

So, in that vein, I would encourage everyone who has replied to this thread to take a minute right now and review your five favorite restaurants in the Guide. If you last reviewed them more than 6 months ago, update the review. If your favorite restaurant isn’t in the Guide, please add it.

Thanks,

Tammy

People who are curious can look at the individual reviews for more detail, noticing both the frequency of the reviews and the dates they were submitted.

If any changes were to be made for “weighting” the average, I would think that recent reviews should take precedence over old reviews.

For instance, I have eaten regularly for several years at Manzanita Restaurant in Oakland, which, IMHO, has gone through some ups and downs. I recently upgraded my review from 4 stars to 5, to reflect what I think are positive changes in the restaurant.

If you want to play with weighting the average, you could assign a factor to each vote. For example, reviews made within the previous year could count as one full vote, reviews made 12–24 months ago could count as 0.8 votes, those made 24–36 months ago as 0.6, and so on. The weighted average would be:

[the sum of all ratings, each multiplied by its factor] divided by [the sum of all the factors]
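Jon’s scheme can be sketched in a few lines of Python. The cut-offs (one full vote under 12 months, 0.8 for 12–24 months, and so on) follow his example, but the helper names and the sample ratings and dates are illustrative assumptions, not anything from the Guide’s actual code:

```python
from datetime import date

def recency_factor(review_date, today):
    # one full vote for reviews under a year old,
    # minus 0.2 for each additional year of age (never below zero)
    years_old = (today - review_date).days // 365
    return max(1.0 - 0.2 * years_old, 0.0)

def weighted_average(reviews, today):
    # reviews: list of (rating, review_date) pairs
    weights = [recency_factor(d, today) for _, d in reviews]
    total_weight = sum(weights)
    if total_weight == 0:
        return None  # nothing recent enough to count
    weighted_sum = sum(r * w for (r, _), w in zip(reviews, weights))
    return weighted_sum / total_weight

# hypothetical reviews for one restaurant
reviews = [
    (5, date(2007, 3, 1)),   # under a year old  -> factor 1.0
    (4, date(2006, 1, 15)),  # 12-24 months old  -> factor 0.8
    (3, date(2005, 2, 1)),   # 24-36 months old  -> factor 0.6
]
avg = weighted_average(reviews, today=date(2007, 6, 1))
# (5*1.0 + 4*0.8 + 3*0.6) / (1.0 + 0.8 + 0.6) = 10.0 / 2.4, about 4.17
```

Note that a restaurant with only very old reviews would end up with no rating at all under this sketch, which may or may not be the desired behavior.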

But that might not be worth the trouble…

-Jon

So I support the “top ten by xyz” lists approach, and let people decide which angle they’d like to view from.

When I look up restaurants in the UG, I normally scan through all or most of the reviews, and one can see at a glance how many reviews a restaurant has, and also how variable the reviews are. I think people can be trusted to make reasonable assessments based on the number and range of reviews; most people understand that if there are only a few reviews, then the results might not be representative. My thoughts anyway…

Re my comment about Millennium, I just brought that up as an example of the possibly arbitrary nature of using the number of reviews to weight the rating. A separate issue that arises in a weighted system is the price/cost of the food. Which is better, a 4.5-star gourmet place or a 4.9-star take-out lunch place? Obviously that depends on what the eater wants for any particular meal or occasion. Anyway, it can all get very complicated very quickly.

Perhaps another search variable for restaurants (separate from the top 10 list) could be Average Cost.

Cheers!

–Mark

Mark, as to why Millennium only has 34 reviews — only 34 people have reviewed it. I think there are a lot of people who use the Guide as a reference, but unfortunately far fewer do reviews. Any suggestions for getting more people to do reviews?

And, just to be clear, if we did implement a rating change that is no longer a straight average, we would have some explanation and link to what it is and how it’s calculated so it’s transparent to everyone.

Greg, I agree, this is more of a policy issue than simply a math calculation. Thank you for adding that critical distinction. I will need to take a closer look at what you suggest (re: policy 1, policy 2). I agree that a negative review amidst generally positive ones can be an isolated incident. At one point we discussed the value of discarding the highest and lowest reviews from the rating calculation.

I agree with you and Will that perhaps adding standard deviation would also be helpful.

I think there are many types of calculations out there and it’s just a matter of finding the most meaningful one.

Thanks,

Tammy

greg’s idea of using the standard deviation to chart the amount of variance from review to review makes a lot of sense.

i also agree that it would be beneficial to keep the usual review ratings (straight average), while posting the amount of variance as a _separate_ number — one that helps the reader understand the significance of the basic average score.

“Udupi Palace has 9 reviews and a rating (average) of 4.33”

I think this is a policy issue more than a math issue. And the UG has already implemented one good policy for the top 10: a minimum of ten reviews.

From a math perspective one thing you could do is publish the standard deviation, which is an indicator of how variable the opinions are about the restaurant.
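As a quick sketch, Python’s standard library already computes this. The ratings below are made up to roughly match the 9-review, 4.33 average mentioned for Udupi Palace; the actual UG data is not shown in this thread:

```python
import statistics

# hypothetical set of 9 ratings averaging about 4.33
ratings = [5, 5, 4, 5, 3, 5, 4, 5, 3]

mean = statistics.mean(ratings)      # the straight average shown today
spread = statistics.pstdev(ratings)  # population standard deviation

# a small spread means reviewers broadly agree;
# a large spread means opinions are divided
```

Publishing the spread alongside the average would let readers distinguish a restaurant everyone rates 4 from one that splits between 5s and 3s.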

From a policy perspective you could do something like one of the following, to give greater weight to restaurants with more reviews:

1 – add 0.005 points for each review.

2 – reduce the gap between the score and 5.0 based upon, for example, the number of reviews divided by 100.

With option 1 Millennium goes from 4.34 to 4.51 and Udupi goes from 4.33 to 4.37.

With option 2 Millennium goes from 4.34 to 4.56 and Udupi goes from 4.33 to 4.39.

In both cases the policy says that a restaurant with more customers willing to write a review should get a greater weighting. The policy has the effect of under-weighting negative reviews, which is OK with me because negative reviews are often irrelevant or immaterial rants from customers who did not try the place more than once. Policy 2 is more aggressive in the under-weighting, but has the downside that a bad restaurant with 100 reviews gets a 5.00; policy 1 might be preferable in this regard. Policy 2 could be constrained to eliminate a maximum of, for example, 50% of the gap.

Millennium has been around a long, long time, and is THE veg*n showcase restaurant, so why does it have only 34 reviews? Udupi Palace is also great, but it’s clearly an “ethnic” destination, so one would expect fewer reviews, no? And if the number of reviews has weight (beyond a minimum), wouldn’t that encourage fake reviews, or incentivized or paid-for reviews? And I still think it’s unfair to newer restaurants.

I think you’d have to have separate lists, e.g., Top 10 Indian, Top 10 Thai, Top 10 General. And that’ll get messy.

On the other hand, if it were made clear that the list is a weighted list, i.e., not based solely on the rating, then you can choose any weight you want (and let the readers know) for all variables; sort of like the ratings of colleges and universities.

Then you’d have to have a rating system for the number of reviews, or some other formula. For example, the number of reviews might be weighted at 10% of the overall score: a rating based on 10–20 reviews would get 20% of that 10%, 21–40 reviews gets 40%, etc. Or that 10% could be based on some other formula (won’t go into further detail here).
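That composite idea can be sketched like this. The 10% weight and the 20%-per-band idea come from the paragraph above, but the exact tier boundaries, function names, and the decision to scale everything back onto a 5-star scale are all assumptions for illustration:

```python
def review_count_tier(n):
    # hypothetical tiers: each band of roughly 20 reviews earns
    # another 20% of the review-count weight (under 10 earns none)
    if n < 10:
        return 0.0
    if n <= 20:
        return 0.2
    if n <= 40:
        return 0.4
    if n <= 60:
        return 0.6
    if n <= 80:
        return 0.8
    return 1.0

def composite_score(avg_rating, n_reviews, count_weight=0.10):
    # 90% of the score comes from the star average (scaled to 0-1),
    # 10% from the review-count tier; result is back on the 5-star scale
    rating_part = (avg_rating / 5.0) * (1.0 - count_weight)
    count_part = review_count_tier(n_reviews) * count_weight
    return 5.0 * (rating_part + count_part)

score = composite_score(4.34, 34)  # Millennium's figures from the thread
```

One caveat: because this rescales the star average, composite scores are only comparable to each other, not to the raw averages readers already know (Millennium’s 4.34 becomes about 4.11 here), so the list would need to be clearly labeled as weighted.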

I just saw Steve’s reply, and yes, exactly… the list can be sorted or even just eyeballed for the number of reviews. After all, it’s a list of only 10, not 100.

–Mark
