BGonline.org Forums
Harry Zilli effect?
Posted By: Daniel Murphy In Response To: Harry Zilli effect? (Chris Haviland)
Date: Tuesday, 8 February 2011, at 9:35 p.m.
(1) Did your club implement this idea and if so, how'd it work out?
(2) Re "If other clubs did the same thing with the same bot, this common denominator would slowly over time shed some light on the relative strength of local clubs."
Very slowly, no? Your idea would mean adding one world-class player to every local club, but this bot-player would play only when the number of human players necessitated one or more byes. I suppose the math could be worked out for any set of assumptions -- one of which would have to be whether there were rating-comparison problems to begin with.
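Here's a minimal sketch of one corner of that math, assuming a single-elimination weekly bracket and an invented attendance series (both assumptions are mine, not anything from the club in question): the bot gets a seat only in weeks where human turnout leaves at least one bye, so its matches accumulate slowly.

    import math

    def byes_needed(n_players):
        # Byes required to fill a single-elimination bracket:
        # next power of two at or above n_players, minus n_players.
        if n_players < 2:
            return 0
        bracket = 2 ** math.ceil(math.log2(n_players))
        return bracket - n_players

    # Hypothetical weekly attendance for one club over eight weeks.
    attendance = [9, 12, 16, 11, 14, 8, 13, 10]

    bot_weeks = sum(1 for n in attendance if byes_needed(n) >= 1)
    print(f"Bot plays in {bot_weeks} of {len(attendance)} weeks")  # 6 of 8

And a single bot fills only one bye, so weeks with turnout like 9 or 10 would still have byes even with the bot entered -- a thin stream of cross-club data.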
(3) Re "Some people might show up exclusively after bot-won events, but they would be properly ridiculed."
Why "properly," for preferring to play for double the prize money for the same entry fee? I could imagine, too, some players refusing to play unless number of participants = 2N and thus no byes and bots. I mean, if the ethics are debated, everyone will have a view, but I'd just note that if you want to encourage social responsibility and healthy clubitude (like everyone showing up weekly) then it makes sense to tune your tournament parameters so that self-interest naturally aligns with social interests.
(4) Backing up a bit to the original subject of a national ratings list including the ratings of local tournaments, and insufficient mixing --
There'd be some very good reasons not to rate human vs. bot club matches in a national ratings list.
My impression (from past discussions here and there) is that the problem of insufficient mixing is more apparent than real, but there's surely a lot of data available for crunching. For example:
(1) Norway has 12 clubs, widely separated geographically, with perhaps 30-40% of all players in Oslo, and most of the other clubs quite small. Most clubs have weekly competition, and there are few events that bring players together nationally. Is there a ratings mixing problem there?
(2) Denmark has a big split between players in the east (on the island where Copenhagen lies) and players in the west (Jutland and Fyn), and also a stratified team tournament where a lot of players play 10-20 matches only within their division. How much compensatory mixing is there? Are west (or east) or elite (or lower division) players over or underrated? Of course, there are weekend tournaments, too, in which open, intermediate and beginning players don't mix. And yet if you look at the national ratings lists, the best players tend to be at the top. Is that because there's enough mixing for segregated play not to matter much?
(3) Don't most chess players, say, in the USCF, only play locally? Are ratings comparisons a big problem for that reason?
(4) Let's turn to where there is a real problem, although how much of a problem is debatable. The USBGF has rated 35 ABT tournaments, and if you sort its list by rating, a lot of intermediate players come out more highly rated than presumably better open players. For example, take the 35 players with the most masterpoints. Most, but not all, are open players:
Name               Rank by masterpoints   Rank by rating
Dorn Bishop                12                    1
Dean Adamian               21                    2
Mike Corbett               27                    3
David Rubin                 2                    4
Mary Hickey                 1                    5
Joe Russell                 7                   18
Kit Woolsey                 6                   19
Jesse Eaton                22                   22
Gary Fries                 16                   37
Rory Pascar                13                   54
Scott Casty                30                   53
Stick Rice                  5                   58
Ray Fogerlund               3                   59
Gregg Cattanach            28                   69
Tak Morioka                11                   76
Adam Bennett               14                   80
Fred Kalantari             20                   94
Neil Kazaross              15                   96
Mike Senkiewicz            18                  102
Larry Taylor               17                  111
Richard Munitz              9                  116
Ed O'Laughlin              10                  124
Bob Koca                   35                  133
Philippe Salnave           32                  149
Lucky Nelson               25                  158
John O'Hagan                4                  183
Bob Steen                  31                  206
Malcolm Davis              19                  234
Alan Grunwald              24                  285
Carter Mattig              33                  305
Gary Bauer                 34                  307
Bill Riles                 29                  374
Ed Bennett                  8                  411
Bill Davis                 26                  504
Julius High                23                  875
If you sort names by flight or account for participation, you get other data to play with. Peruse the list and draw your own conclusions. I have only questions ;-)
(a) How big of a problem is it?
(b) Will it sort itself out if left alone?
(c) How different would the ranking be if the formula did not use the FIBS-like "boost" for new players? (See the sketch after this list.)
(d) Do solutions create other problems?
(e) Could an easy solution be to simply list open and below-open players separately?
(f) If the USBGF rated club events, would that make the problem of comparing ratings better or worse? Keep in mind that most players rated so far have played, and will continue to play, only (or nearly so) in local events (perhaps their club weeklies and the ABT tournament nearest them).
(g) How many ABT players play in local clubs, too?
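Re (c), the sketch promised above: a minimal illustration of how a FIBS-style boost works, using the constants FIBS itself is usually described with (a multiplier ramping from 5 down to 1 over the first 400 points of experience). Whether the USBGF formula uses exactly these numbers is an assumption on my part; the source only calls it FIBS-like.

    import math

    def fibs_style_update(rating, opp_rating, match_len, experience, won):
        # Estimated winning chance from rating difference and match length.
        diff = opp_rating - rating
        p_win = 1.0 / (1.0 + 10 ** (diff * math.sqrt(match_len) / 2000.0))
        # The new-player "boost": multiplier K ramps from 5 down to 1
        # over the first 400 points of experience, then stays at 1.
        k = max(1.0, 5.0 - experience / 100.0)
        return rating + 4.0 * k * math.sqrt(match_len) * ((1.0 if won else 0.0) - p_win)

    # A brand-new player beating an equal-rated opponent in a 7-pointer
    # gains five times what an established player would:
    print(fibs_style_update(1500, 1500, 7, experience=0, won=True))    # ~1526.5
    print(fibs_style_update(1500, 1500, 7, experience=500, won=True))  # ~1505.3

The point for question (c): early results are weighted heavily, so a new player on a hot streak can leapfrog established players until the boost wears off.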
Suggestions to change the rating system (both the formula and its administration) come up from time to time in Denmark. In one such discussion, a board member aptly said something like this: acceptance of a rating system rests on trust, so let's be sure we understand the problems, the solutions, and the consequences of "fixes" before making changes.