Good morning, Earthling,
I wish you a good and successful refresher course!
Many thanks for the video of the boxing match “Ivan Drago me against the world”, which somehow illustrates the subprime crisis more concretely than a thousand words!
As you know, stock exchanges are subject to the same laws as every other market, namely supply and demand. If we suddenly have a third fewer working people, but all the more pensioners, demand for shares will fall, while the supply from pensioners wanting to sell their shares will rise, and prices will, for better or worse, decline!
Nothing could be more logical than this horror scenario for the next 27 years!
And UBS staff are already beginning to feel it: UBS employees are ten billion poorer. According to an NZZ analysis, which I share, the crash of the share price has nasty consequences for employees: half of them hold shares and options of the big bank!
Then comes another piece of news from the NZZ that is to be taken quite seriously, and whose content I do not wish to dispute either:
UBS facing a further capital increase
OK, another capital increase is pending, but what is it so urgently needed for?
According to analysts, the probability is high that UBS will take further write-downs in the first quarter and report a loss. If its capital strength is to be preserved, the bank will hardly be able to avoid another capital increase.
And how much is supposed to be written down this time?
A renewed strengthening of the equity base is likely to be necessary to preserve clients' confidence in the solidity of the bank, all the more so since, after the severe market turbulence in March, the probability has risen that UBS will have to take further write-downs on its distressed subprime holdings for the first quarter.
Analysts assume a need for value adjustments of up to CHF 15 billion and are already factoring in a quarterly loss. Red quarterly figures could cause the core capital ratio (“Tier 1”) to slip below the bank's internal “comfort mark” of 10%. If that happened, UBS would have to reckon with a downgrade of its ratings, higher refinancing costs, and a loss of confidence damaging to its wealth-management business. Against this backdrop, much speaks for a further capital increase.
With the CHF 20 billion already written down and the CHF 15 billion still to be written down, we would then arrive at the title of my thread of 6 November 2008: “Subprime Krise der UBS: US$38.8 Milliarden at Risk”
sub-prime mortgage blues
Chapter 10: The Scandal of Prediction wrote:
We are demonstrably arrogant about what we think we know. We certainly know a lot, but we have a built-in tendency to think that we know a little bit more than we actually do, enough of that little bit to occasionally get into serious trouble.
Why on earth do we predict so much? Worse, even, and more interesting: Why don’t we talk about our record in predicting? Why don’t we see how we (almost) always miss the big events? I call this the scandal of prediction.
On The Vagueness of Catherine’s Lover Count
Let us examine what I call epistemic arrogance, literally, our hubris concerning the limits of our knowledge. Epistēmē is a Greek word that refers to knowledge. Our knowledge does grow, but it is threatened by greater increases in confidence, which make our increase in knowledge at the same time an increase in confusion, ignorance, and conceit.
Take a room full of people. Ask each person in the room to independently estimate a range of possible values for a statement in such a way that they believe that they have a 98 percent chance of being right, and less than 2 percent chance of being wrong. In other words, whatever they are guessing has about a 2 percent chance to fall outside their range. For example:
“I am 98 percent confident that Catherine II of Russia had between 34 and 63 lovers.”
You can make inferences about human nature by counting how many people in your sample guessed wrong: it is not expected to be too much higher than two out of a hundred participants. Note that the people are free to set their range as wide as they want: you are not trying to gauge their knowledge but rather their evaluation of their own knowledge.
Now the results. Like many things in life, the discovery was unplanned, serendipitous, surprising, and took a while to digest. The researchers came out befuddled. The 2 percent error rate turned out to be close to 45 percent in the population being tested. Are we twenty-two times too comfortable with what we know? It seems so.
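The experiment above can be sketched numerically. A minimal simulation, with an invented true value and invented, overconfident subjects, just counts how often a stated 98-percent range misses; all names and numbers here are assumptions for illustration, not data from the study.

```python
import random

TRUE_VALUE = 12  # the quantity being estimated (invented for this sketch)

def miss_rate(intervals, true_value):
    """Fraction of stated ranges that fail to contain the true value."""
    misses = sum(1 for lo, hi in intervals if not (lo <= true_value <= hi))
    return misses / len(intervals)

# Simulated subjects: narrow ranges clustered around a biased guess,
# mimicking the overconfidence the researchers observed.
random.seed(0)
subjects = [(g - 2, g + 2) for g in (random.gauss(15, 4) for _ in range(100))]

print(f"claimed miss rate: 2%, observed: {miss_rate(subjects, TRUE_VALUE):.0%}")
```

With ranges that narrow, the observed miss rate lands far above the claimed 2 percent, which is the whole point of the experiment.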
Epistemic arrogance bears a double effect: we overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states (i.e., by reducing the space of the unknown). The applications of this distortion extend beyond the mere pursuit of knowledge. To take an obvious example, think about how many people divorce. Almost all of them are acquainted with the statistic that between one-third and one-half of all marriages fail, something the parties involved did not forecast while tying the knot. Of course, “not us,” because “we get along so well” (as if others tying the knot got along poorly).
Information is Bad for Knowledge
You may wonder how learning, education, and experience affect epistemic arrogance—how educated people might score on the above test, as compared to the rest of the population. You will be surprised by the answer: it depends on the profession. The problem is that our ideas are sticky: once we produce a theory, we are not likely to change our minds—so those who delay developing their theories are better off. When you develop your opinions on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts these opinions, even if this new information is obviously more accurate. Two mechanisms are at play here: confirmation bias and belief perseverance, the tendency not to reverse opinions you already have. Remember that we treat ideas like possessions, and it will be hard for us to part with them.
What Moves and What Does Not Move
We are shown by a class of expert-busting researchers such as Paul Meehl and Robyn Dawes that the “expert” is the closest thing to a fraud, performing no better than a computer using a single metric, their intuition getting in the way and blinding them. On the other hand, there is abundant literature showing that many people can beat computers thanks to their intuition. Which one is correct?
The psychologist James Shanteau undertook the task of finding out which disciplines have experts and which have none.
Experts who tend to be experts: livestock judges, astronomers, test pilots, soil judges, chess masters, physicists, accountants, grain inspectors, photo interpreters.
Experts who tend to be … not experts: stockbrokers, clinical psychologists, psychiatrists, college admissions officers, court judges, councilors, personnel selectors, economists, financial forecasters, political scientists.
Simply, things that move, and therefore require knowledge, do not usually have experts, while things that don’t move seem to have some experts. In other words, professions that deal with the future and base their studies on the nonrepeatable past have an expert problem (with the exception of the weather).
Another way to see it is that things that move are often Black Swan-prone. Experts are narrowly focused persons who need to “tunnel.” In situations where tunneling is safe, because Black Swans are not consequential, the expert will do well.
You cannot ignore self-delusion. The problem with experts is that they do not know what they do not know. Lack of knowledge and delusion about the quality of your knowledge come together—the same process that makes you know less also makes you satisfied with your knowledge.
Events Are Outlandish
The problem with prediction is a little more subtle. It comes mainly from the fact that we are living in Extremistan, not Mediocristan. Our predictors may be good at predicting the ordinary, but not the irregular, and this is where they ultimately fail. All you need to do is miss one interest-rates move, from 6 percent to 1 percent in a longer-term projection (what happened between 2000 and 2001), to have all your subsequent forecasts rendered completely ineffectual in correcting your cumulative track record. What matters is not how often you are right, but how large your cumulative errors are.
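As a back-of-the-envelope illustration (all numbers here are invented for the sketch), compare the error a forecaster accumulates over nine routine periods with the error from missing the one big move:

```python
# Hypothetical sketch of cumulative forecast error. A forecaster who is
# off by a quarter point on nine routine periods, but misses the single
# jump from 6% to 1%, finds that the one surprise dominates the total.
errors_ordinary = [0.25] * 9      # small misses on routine moves (assumed)
error_surprise = abs(6.0 - 1.0)   # the one missed regime change: 5 points

total = sum(errors_ordinary) + error_surprise
print(sum(errors_ordinary), error_surprise, total)
```

Being right nine times out of ten does not help: the single missed jump contributes more error than all the ordinary periods combined.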
And these cumulative errors depend largely on the big surprises, the big opportunities. Not only do economic, financial, and political predictors miss them, but they are quite ashamed to say anything outlandish to their clients—and yet events, it turns out, are almost always outlandish. Forecasters tend to fall closer to one another than to the resulting outcome. Nobody wants to be off the wall.
I Was “Almost” Right
The psychologist Philip Tetlock studied the business of political and economic “experts.” He asked various specialists to judge the likelihood of a number of political, economic, and military events occurring within a specified time frame (about five years ahead). The study revealed that experts’ error rates were clearly many times what they had estimated. His study exposed an expert problem: there was no difference in results whether one had a PhD or an undergraduate degree. Well-published professors had no advantage over journalists. The only regularity Tetlock found was the negative effect of reputation on prediction: those who had a big reputation were worse predictors than those who had none.
But Tetlock’s focus was not so much to show the real competence of experts (although the study was quite convincing with respect to that) as to investigate why the experts did not realize that they were not so good at their own business, in other words, how they spun their stories. He therefore dug further into the mechanisms by which his subjects generated ex post explanations.
You tell yourself that you were playing a different game. Despite your vast knowledge in a particular area, some vital information was not available to you. It is not your skills that are to blame.
You invoke the outlier. Something happened that was outside the system, outside the scope of your science. Given that it was not predictable, you are not to blame.
The “almost right” defense. Retrospectively, with the benefit of a revision of values and an informational framework, it is easy to feel that it was a close call. These “experts” were lopsided: on the occasions when they were right, they attributed it to their own depth of understanding and expertise; when they were wrong, it was either the situation that was to blame, since it was unusual, or, worse, they did not recognize that they were wrong and spun stories around it. They found it difficult to accept that their grasp was a little short. But this attribute is universal to all our activities: there is something in us designed to protect our self-esteem.
We humans are the victims of an asymmetry in the perception of random events. We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad.
The Character of Prediction Errors
Like many biological variables, life expectancy is from Mediocristan, that is, it is subject to mild randomness. It is not scalable, since the older we get the less likely we are to live. In a developed country a newborn female is expected to die at around 79, according to insurance tables. When she reaches her 79th birthday, her life expectancy, assuming that she is in typical health, is another 10 years. At the age of 90, she should have another 4.7 years to go. At the age of 100, 2.5 years. At the age of 119, if she miraculously lives that long, she should have about nine months left. As she lives beyond the expected date of death, the number of additional years to go decreases. This illustrates the major property of random variables related to the bell curve: the conditional expectation of additional life drops as a person gets older.
With human projects and ventures we have another story. These are often scalable. With scalable variables, the ones from Extremistan, you will witness the exact opposite effect. Let’s say a project is expected to terminate in 79 days. On the 79th day, if the project is not finished, it will be expected to take another 25 days to complete. But on the 90th day, if the project is still not completed, it should have about 58 days to go. On the 100th, it should have 89 days to go. On the 119th, it should have an extra 149 days. On day 600, if the project is not done, you will be expected to need an extra 1,590 days. As you see, the longer you wait, the longer you will be expected to wait.
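The contrast between the two regimes can be checked with a quick Monte Carlo sketch. The distributions and parameters below are assumptions chosen only to mimic the two cases: a normal distribution for lifespans (Mediocristan, thin tails) and a Pareto distribution for project durations (Extremistan, fat tails).

```python
import random

def expected_remaining(samples, t):
    """Estimate E[X - t | X > t] from a list of samples."""
    survivors = [x for x in samples if x > t]
    return sum(survivors) / len(survivors) - t

random.seed(1)
# Thin-tailed: lifespans ~ Normal(mean=79, sd=10) (assumed parameters).
lifespans = [random.gauss(79, 10) for _ in range(200_000)]
# Fat-tailed: project durations ~ 79 * Pareto(alpha=1.5) (assumed parameters).
projects = [79 * random.paretovariate(1.5) for _ in range(200_000)]

for t in (79, 90, 100):
    print(t,
          round(expected_remaining(lifespans, t), 1),
          round(expected_remaining(projects, t), 1))
```

For the normal variable the expected remaining time shrinks as the threshold grows, while for the Pareto variable it grows roughly linearly with the threshold, which is exactly the "the longer you wait, the longer you will be expected to wait" effect described above.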
The subtle but extremely consequential property of scalable randomness is unusually counterintuitive. We misunderstand the logic of large deviations from the norm. Scalable randomness is central to our misunderstanding of the business of prediction.
Don’t Cross a River if it is (on Average) Four Feet Deep
Forecasting without incorporating an error rate uncovers three fallacies, all arising from the same misconception about the nature of uncertainty.
The first fallacy: variability matters. The first error lies in taking a projection too seriously, without heeding its accuracy. Yet, for planning purposes, the accuracy of your forecast matters far more than the forecast itself. You would take a different set of clothes on your trip to some remote destination if I told you that the temperature was expected to be seventy degrees Fahrenheit, with an expected error rate of forty degrees, than if I told you that the margin of error was only five degrees. The policies we need to make decisions on should depend far more on the range of possible outcomes than on the expected final number.
The second fallacy lies in failing to take into account forecast degradation as the projected period lengthens. We do not realize the full extent of the difference between near and far futures. Yet the degradation in such forecasting through time becomes evident through simple introspective examination. Think about the forecasts made in 1975 about the prospects of the new millennium. Many events have taken place and new technologies have appeared that lay outside the forecasters’ imaginations; many more that were expected to take place or appear did not do so. Our forecast errors have traditionally been enormous, and there may be no reasons for us to believe that we are suddenly in a more privileged position to see into the future compared to our blind predecessors.
The third fallacy, and perhaps the gravest, concerns a misunderstanding of the random character of the variables being forecast. Owing to the Black Swan, these variables can accommodate far more optimistic—or far more pessimistic—scenarios than are currently expected.
What is the implication here? Even if you agree with a given forecast, you have to worry about the real possibility of significant divergence from it. These divergences may be welcomed by a speculator who does not depend on steady income; a retiree, however, with set risk attributes cannot afford such gyrations.
Moral of the story: never trust the neoconservatives, never trust the SVP, never trust the Republicans!
Regards
Zuyox