In terms of UI, the differences vs Gmail seem trivial and possibly in the wrong direction. The three-column format is identical; it is mainly cleaner because it removes useful features: inline email text preview, a prominent search box, and navigation to other frequently used services, e.g. Drive. At first glance sparse designs look better, but for frequently used software people tend to prefer the greater data density in practice. I also imagine that they will introduce their own navigation bar to other services at some point, so that minor distinction would be temporary. On the other hand it is far better than Microsoft's existing offering, and being able to integrate with third-party social services rather than G+ is a substantial difference they should continue to run with.
There's a navigation bar to other services (click on the chevron next to the "Outlook" title). It's just less prominent and two clicks instead of one. Which I think is the right tradeoff, as wanting to navigate to another service isn't as common as Google would like it to be.
Not sure if it would be such an economic bomb. Assuming there are many mineable asteroids, others would enter the market. So the long-term effect would be that the cost of natural resources approaches the marginal cost of extraction. And markets should have time to adjust, as there will be increasingly better information on what the initial extractions will look like. It will certainly disrupt the commodities sector, ending many companies and giving rise to incredibly large new ones. And the implications of the new commodity prices would ripple through the rest of the economy. I think it will seem like a bomb to those companies displaced, but not as much for the economy as a whole.
I would say that if space mining is more efficient (in terms of productivity and/or environmental impact), then doing it will be better overall for the whole of Earth.
The author is making a straw man argument, though it does not seem to be intentional (edit: I am referring to Rubinstein's original analysis). The author points out that using data for a single class and a single year is not a reliable indicator (second graph). That is uncontested and exactly why ratings are based on 3 years of data and multiple classes where possible. Even this cannot provide a single, accurate percentile value. That is why confidence intervals are used and are rather prominently displayed in all the graphs; e.g. <http://www.nytimes.com/schoolbook/school/656-ps-009-teunis-g...>
Given the example above, there is one teacher who is at the 50th percentile for career math, but the confidence interval (CI) indicates that this may really be anywhere from well below average to well above average. In contrast, there is another teacher with a value of 3, and even the highest bound of the CI still places them well below average. Conversely, there is a teacher at the 90th percentile, and even the lowest bound of the confidence interval still makes this a highly effective teacher.
In sum, the data make evident that a few teachers are fairly unambiguously ineffective (at least with regards to test results), a few are unambiguously effective, but for the majority the most that can be said is that they are neither the best nor the worst, but somewhere in the middle.
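To make the point about confidence intervals concrete, here is a minimal sketch (numbers invented for illustration, not taken from the NYC dataset) of why the interval around a teacher's estimate narrows as more years of classes are pooled:

```python
import math

def ci_half_width(std_dev, n_students, z=1.96):
    """Half-width of a 95% CI for a mean score gain, normal approximation."""
    return z * std_dev / math.sqrt(n_students)

sd = 15.0                          # assumed spread of student score gains
one_year = ci_half_width(sd, 25)   # one class of ~25 students
three_years = ci_half_width(sd, 75)  # three years of classes pooled

print(round(one_year, 1))    # 5.9 -- wide interval from a single class
print(round(three_years, 1)) # 3.4 -- narrower after pooling three years
```

With a single class the interval is wide enough that a mid-range teacher could plausibly be anywhere from below to above average; only the extremes escape the interval, which is exactly the pattern in the linked graphs.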
So if you restrict yourself to conclusions merited by the data, then there is some value to this (i.e. identifying the extremely ineffective and effective). As it stands now, the teacher with a 3 may very well have tenure and be paid considerably more than the teacher with a 90 (given that pay is largely a function of seniority and education, and nearly all teachers have tenure). That is what this exercise is meant to address.
So you definitely cannot assign a fine-grained single value to every teacher (and the author's analyses speak to that issue), but that does not mean you cannot make any conclusions at all. If folks make conclusions that exceed what is supported by the data that is a fault of the analyst or perhaps a function of poor communication or visualization, not evidence that the data is junk.
That said, the tests could be improved (and supposedly are being improved), and at the very least the measurements place more focus on the subjects being tested.
And the teacher effectiveness data is only 40% of the new approach to evaluating teachers in NY; the other 60% includes peer evaluation. It may be useful if that became part of the public data so that a more balanced picture is available, but it is unclear whether that data would be public.
"So if you restrict yourself to conclusions merited by the data, then there is some value to this (i.e. identifying the extremely ineffective and effective)."
Yes. Anyone who cares about the education of children will welcome further improvements in evaluating teachers, but there is already actionable information available today. Bill Gates makes the good point that the best system of evaluating teachers should lead to interventions that help each teacher do better. But the existing performance gap, which may also be revealed by which teachers other teachers choose for their own children, could guide a system of encouraging the worst performers to seek other occupations.
Current accountability reform places too much faith in the power of incentives and hence many metrics (not only teacher effectiveness and not only the NYC accountability system) are not directed to offering any sort of guidance towards improving schools. The NYC Progress Reports, which give schools grades, are another prime example of this. Personally, I hope that the next stage in accountability will see a shift to making the data more actionable from the perspective of practitioners and it is something I have been encouraging for a few years. I think part of the obstacle, though perhaps not the primary one, is the criticism that measurements are not accurate. So given the option of re-thinking how to make actionable reports or measurements vs. just improving what exists (via extending and revising statistical methods, refining business rules, improving data quality, and including additional data sources) the latter is winning out even in areas where there may be diminishing returns to doing this.
On the other hand, one could argue that the issue of how to improve the schools is addressed by other projects or other areas in the system, and it is sufficient for these tools just to make clear and accurate evaluations. We are dealing with a big problem, both in terms of depth and significance. So there is room to approach this from many angles.
Regarding the issue of making the data public (i.e. Gates' op-ed), there are competing values at play. Yes, it will be misused and some folks will be unfairly hurt. I would rather have the data public (not just the teacher effectiveness data), and allow everyone the option to draw their own diverse conclusions, perform their own analyses (like the one linked to), and as a society we can become more sophisticated and knowledgeable about these issues. The alternative is that the only people with access to this data will be bureaucrats with a specific policy agenda and a handful of academics. Broader society, especially parents, has a right to weigh in on these issues and be a part of this discussion, and they cannot do this effectively without open data. To reiterate, I don't want technocrats driving education (that is part of the broader problem with current reform); hopefully open data such as this can limit that.
The solution is to do a better job of disclosing the information and providing the necessary context rather than just hiding it. NYTimes did a pretty sweet job in this respect. I like where they are going with SchoolBook.
I would like to see the media and public consistently push government for better open data. Right now that is not happening, and agencies are not going to make big changes in this area in the absence of that pressure.
You're right, I did take the route of attacking single-metric incentive systems despite the fact that they're not 100% single-metric. I wanted to get a point across that I think still needs getting across.
I'm not arguing against the use of data in helping teachers understand their effectiveness for one second, and I mention in the article that the data becomes more correlated and useful as more years are included.
"If folks make conclusions that exceed what is supported by the data that is a fault of the analyst or perhaps a function of poor communication or visualization, not evidence that the data is junk."
...completely agree. If my article seemed to argue that "metrics are dangerous" instead of "metrics are dangerous if you choose to publish them publicly while simultaneously using them as a significant component in compensation calculations," then I missed my intended point.
I was speaking to Rubinstein's underlying article. Sorry for the ambiguity. I strongly agree with you regarding the importance of data being used to empower relevant stakeholders. This issue has been on my mind a lot lately. I would even take it a step further: the priority should be empowering the individuals closest to the data and then working outward. So the first priority is making the data empowering for students, e.g. so they have the access and tools to be more reflective about their individual results, learning practices, strengths, and so forth. Depending on age, parents would be here as well. Then teachers. And last should be using the data to empower administrators or bureaucrats. What we have seen is precisely the opposite: the people at the greatest distance from the activities generating the data, i.e. the actual learning activities, are most empowered by it. This is reflective of big data business models in general. And I think that is at the heart of where the use of data in education has gone most astray. And this issue is relevant not only to government, but to edutech companies as well. Are they empowering themselves with the data they collect to the detriment of empowering students, etc.? Who owns the insights extracted from the data? Are students free to extract their own data and take it elsewhere? etc.
It seems Khan Academy has taken this student-first approach as well and has put it into practice. I would be interested in hearing more about their philosophies, practices, or intents with respect to these other dimensions of educational data.
We have "focus on the student" hanging all over our office (which is pretty tricky b/c we change offices all the time these days for reasons I won't get into :) ). It's easy to see how other agendas could creep into educational organizations (especially if you're worried about profit), but for us right now...and hopefully forever...the student is our clear priority.