
BGonline.org Forums

Harry Zilli effect?

Posted By: Daniel Murphy
Date: Tuesday, 8 February 2011, at 9:35 p.m.

In Response To: Harry Zilli effect? (Chris Haviland)

(1) Did your club implement this idea and if so, how'd it work out?

(2) Re: "If other clubs did the same thing with the same bot, this common denominator would slowly over time shed some light on the relative strength of local clubs."

Very slowly, no? Your idea would mean adding one world-class player to every local club, but this bot-player would play only when the number of human entrants necessitated one or more byes. I guess the math could be worked out for any set of assumptions -- one of which would have to be whether there were rating-comparison problems to begin with.
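To put rough numbers on "very slowly": in a single-elimination bracket, byes appear only when the field is not an exact power of two, and the number of byes is the gap to the next power of two. A minimal sketch (the sample field sizes are illustrative, not anyone's actual attendance):

```python
import math

def byes_needed(entrants: int) -> int:
    """Byes (bot-fillable slots) needed to round a single-elimination
    bracket up to the next power of two."""
    if entrants < 2:
        return 0
    bracket = 1 << math.ceil(math.log2(entrants))
    return bracket - entrants

# Under the proposal the bot fills one bye slot, so it plays in any
# week where attendance is not an exact power of two.
for n in (8, 9, 13, 16):
    print(n, byes_needed(n))
```

So at 8 or 16 entrants the bot sits out entirely; it is only the in-between weeks that generate any bot matches at all.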

(3) Re: "Some people might show up exclusively after bot-won events, but they would be properly ridiculed."

Why "properly," for preferring to play for double the prize money for the same entry fee? I could imagine, too, some players refusing to play unless the number of participants was an exact power of two (2^N), leaving no byes and no bot. I mean, if the ethics are debated, everyone will have a view, but I'd just note that if you want to encourage social responsibility and healthy clubitude (like everyone showing up weekly), then it makes sense to tune your tournament parameters so that self-interest naturally aligns with social interests.

(4) Backing up a bit to the original subject of a national ratings list including the rating of local tournaments, and insufficient mixing --

There'd be some very good reasons not to rate human vs. bot club matches in a national ratings list.

My impression (from past discussions here and there) is that the problem of insufficient mixing is more apparent than real, but there's surely a lot of data available for crunching. For example:

(1) Norway has 12 clubs, widely separated geographically, with perhaps 30-40% of all players in Oslo, and most of the other clubs quite small. Most clubs have weekly competition, and there are few events that bring players together nationally. Is there a ratings mixing problem there?

(2) Denmark has a big split between players in the east (on the island where Copenhagen lies) and players in the west (Jutland and Fyn), and also a stratified team tournament where a lot of players play 10-20 matches only within their division. How much compensatory mixing is there? Are west (or east) or elite (or lower division) players over or underrated? Of course, there are weekend tournaments, too, in which open, intermediate and beginning players don't mix. And yet if you look at the national ratings lists, the best players tend to be at the top. Is that because there's enough mixing for segregated play not to matter much?

(3) Don't most chess players, say, in the USCF, only play locally? Are ratings comparisons a big problem for that reason?

(4) Let's turn to where there's a real problem, although how much of a problem is debatable. The USBGF has rated 35 ABT tournaments, and if you sort the list by rating, a lot of intermediate players are rated more highly than presumably better open players. For example, take the 35 players with the most masterpoints. Most, but not all, are open players:

Name               Rank by masterpoints   Rank by rating
Dorn Bishop                 12                 1
Dean Adamian                21                 2
Mike Corbett                27                 3
David Rubin                  2                 4
Mary Hickey                  1                 5
Joe Russell                  7                18
Kit Woolsey                  6                19
Jesse Eaton                 22                22
Gary Fries                  16                37
Rory Pascar                 13                54
Scott Casty                 30                53
Stick Rice                   5                58
Ray Fogerlund                3                59
Gregg Cattanach             28                69
Tak Morioka                 11                76
Adam Bennett                14                80
Fred Kalantari              20                94
Neil Kazaross               15                96
Mike Senkiewicz             18               102
Larry Taylor                17               111
Richard Munitz               9               116
Ed O'Laughlin               10               124
Bob Koca                    35               133
Philippe Salnave            32               149
Lucky Nelson                25               158
John O'Hagan                 4               183
Bob Steen                   31               206
Malcolm Davis               19               234
Alan Grunwald               24               285
Carter Mattig               33               305
Gary Bauer                  34               307
Bill Riles                  29               374
Ed Bennett                   8               411
Bill Davis                  26               504
Julius High                 23               875
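One rough way to quantify how far the two orderings diverge is a rank correlation over the 35 rows. A sketch, assuming plain Spearman's rho with the national-rating column re-ranked within this sample (pairs transcribed from the table above):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for tie-free data:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(xs)
    def ranks(vals):
        order = sorted(range(n), key=lambda i: vals[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# (rank by masterpoints, rank by rating), one pair per player
pairs = [
    (12, 1), (21, 2), (27, 3), (2, 4), (1, 5), (7, 18), (6, 19),
    (22, 22), (16, 37), (13, 54), (30, 53), (5, 58), (3, 59),
    (28, 69), (11, 76), (14, 80), (20, 94), (15, 96), (18, 102),
    (17, 111), (9, 116), (10, 124), (35, 133), (32, 149), (25, 158),
    (4, 183), (31, 206), (19, 234), (24, 285), (33, 305), (34, 307),
    (29, 374), (8, 411), (26, 504), (23, 875),
]
mp = [p[0] for p in pairs]
rating = [p[1] for p in pairs]
print(round(spearman_rho(mp, rating), 3))
```

A rho near 1 would mean the two orderings largely agree; a middling positive value means the very top is roughly preserved while the rest reshuffles heavily -- which is what the table looks like to the eye.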

If you sort the names by flight or account for participation, you get other data to play with. Peruse the list and draw your own conclusions. I have only questions ;-)

(a) How big of a problem is it?

(b) Will it sort itself out if left alone?

(c) How different would the ranking be if the formula did not use the FIBS-like "boost" for new players?

(d) Do solutions create other problems?

(e) Could an easy solution be to simply list open and below-open players separately?

(f) If USBGF rated club events, would this make the problem of comparing ratings better or worse? Keep in mind that most players rated so far have and will play only (or nearly so) in local events (perhaps their club weeklies and the ABT tournament nearest them).

(g) How many ABT players play in local clubs, too?
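On (c): a FIBS-style system awards the winner 4*sqrt(n) times the winner's pre-match probability of losing, and multiplies a new player's change by a ramp factor that decays to 1 as experience accumulates. A sketch of that shape only -- the boost schedule here is the FIBS-like one, assumed for illustration, not the USBGF's published formula:

```python
import math

def fibs_like_update(r_win, r_lose, n, exp_win=400, exp_lose=400):
    """One match result under a FIBS-style rating system.

    p_upset: pre-match probability that the eventual winner loses.
    New-player boost (assumed): max(1, 5 - experience/100), i.e.
    5x at zero experience, tapering to 1x by 400 experience points.
    """
    p_upset = 1 / (1 + 10 ** ((r_win - r_lose) * math.sqrt(n) / 2000))
    delta = 4 * math.sqrt(n) * p_upset
    boost = lambda e: max(1.0, 5.0 - e / 100.0)
    return r_win + delta * boost(exp_win), r_lose - delta * boost(exp_lose)

# Equal 1500s, 1-point match: an experienced winner gains 2 points...
print(fibs_like_update(1500, 1500, 1))
# ...but a brand-new winner gains 10, which is how a short run of
# early results can leave a newcomer well above (or below) his level.
print(fibs_like_update(1500, 1500, 1, exp_win=0))
```

Re-running the list without the boost would show how much of the intermediate-over-open reshuffling is just new accounts still riding their ramp.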

Suggestions to change the rating system (both the formula and administration) come up from time to time in Denmark. In one such discussion, one of the board members aptly said something like this: the acceptance of a rating system is based on trust, so let's be sure we understand the problems, solutions and consequences of "fixes" before making changes.
