Risk is the potential harm that may arise from some present process or from some future event. It is often expressed as the probability of an event that is seen as undesirable. Usually the probability of that event and some assessment of its expected harm are combined into a believable scenario (an outcome), which combines the risk, regret and reward probabilities into an expected value for that outcome. There are many informal methods used to assess (or to "measure", although direct measurement is usually impossible) risk, and, for some applications, formal methods such as value at risk.
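The combination of probabilities and harms into an expected value can be sketched numerically. The scenarios and figures below are hypothetical, chosen only for illustration:

```python
# Expected value of an outcome: probability-weighted sum of harms and rewards.
# Scenario names and figures are hypothetical, for illustration only.

scenarios = [
    # (probability, monetary outcome: negative = harm/regret, positive = reward)
    (0.70,  1000.0),   # likely modest reward
    (0.25,  -500.0),   # possible loss
    (0.05, -8000.0),   # rare but severe harm
]

expected_value = sum(p * outcome for p, outcome in scenarios)
print(f"Expected value: {expected_value:.2f}")
```

Note that the rare severe harm contributes heavily to the total even at 5% probability, which is why low-probability, high-impact events dominate many risk assessments.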
In futures trading, risk is the loss of trading capital.
Risk is different from threat
In scenario analysis, "risk" is distinct from "threat". A threat is a very low-probability but serious event, to which some analysts may be unable to assign a probability in a risk assessment because it has never occurred, and for which no effective preventive measure (a step taken to reduce the probability or impact of a possible future event) is available. The difference is most clearly illustrated by the precautionary principle, which seeks to reduce threat by requiring it to be reduced to a set of well-defined risks before an action, project, innovation or experiment is allowed to proceed.
A more specific example is the preparedness of the United States of America prior to the devastating attacks of September 11, 2001. Although the Central Intelligence Agency had often warned of a "clear and present danger" that planes would be used as weapons, this was considered a threat, not a risk. Accordingly, no comprehensive scenarios of probabilities and counter-measures were ever prepared for the type of attack that occurred. Under a frequentist approach to probability, a threat cannot be characterized as a risk until there has been at least one specific incident in which the threat can be said to have been realized. From that point, there is at least some basis for a probability, e.g. "in the entire history of air travel, X flights have led to 1 incident of..." By contrast, Bayesian probability methods allow a threat to be assigned a degree of belief even if it has never happened before, and this degree of belief can then be treated as a probability.
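The contrast between the two approaches can be made concrete. Assuming hypothetical counts and a hypothetical prior, a frequentist relative-frequency estimate stays at zero until an incident occurs, while a Bayesian analyst can start from a prior degree of belief and update it with the same data:

```python
# Frequentist vs Bayesian treatment of a never-before-realized threat.
# All counts and the prior are hypothetical, chosen only for illustration.

incidents, trials = 0, 10_000          # the threat has never been realized

# Frequentist: relative frequency -- zero until at least one incident occurs.
freq_estimate = incidents / trials

# Bayesian: a Beta(a, b) prior expresses a degree of belief before any data;
# the posterior mean after the data is (a + incidents) / (a + b + trials).
a, b = 1, 999                          # prior belief of roughly 1 in 1000
bayes_estimate = (a + incidents) / (a + b + trials)

print(freq_estimate)    # 0.0 -- the threat "does not exist" for this method
print(bayes_estimate)   # small but nonzero degree of belief
```

The Bayesian estimate shrinks as incident-free experience accumulates, but never reaches zero, so the threat remains representable in a risk assessment.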
In information security, a "risk" is defined as the probability that a threat will act on a vulnerability to cause an impact; in other words, a risk represents the chance coincidence of all three elements. Threats in this context include deliberate, directed acts (e.g. by hackers) and undirected, random or unpredictable events (such as a lightning strike). Vulnerabilities are generally caused by weaknesses in the system of preventive controls, including missing or ineffective procedural or technical controls, bugs in systems, and so on. Impacts are adverse effects on organizations, individuals or society at large. A vulnerability is not an issue per se unless a threat exploits it and causes an impact. Risk management therefore involves minimizing the threats, vulnerabilities and/or impacts.
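One common informal way to express this coincidence of the three elements is as a product of the two probabilities and the impact severity. The sketch below uses hypothetical figures and is not a prescribed formula from any particular framework; real assessments often use ordinal scales or fuller loss models instead:

```python
# Information-security risk as the coincidence of threat, vulnerability, impact.
# All figures are hypothetical, for illustration only.

p_threat = 0.30          # annual probability that a threat acts (e.g. an attack attempt)
p_vulnerability = 0.10   # probability the attempt finds an exploitable weakness
impact = 250_000.0       # estimated loss if the impact is realized

annualized_risk = p_threat * p_vulnerability * impact
print(f"Annualized risk: {annualized_risk:.2f}")

# A vulnerability with no matching threat contributes no risk:
print(0.0 * p_vulnerability * impact)   # 0.0
```

The zero case mirrors the text: a vulnerability is not an issue per se unless a threat acts on it, so driving any one of the three factors toward zero reduces the risk.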
Professions and governments manage risk
Means of measuring and assessing risk vary widely across professions; indeed, the means of doing so may define a profession: a doctor manages medical risk, a civil engineer manages the risk of structural failure, and so on.
A professional code of ethics is usually focused on risk assessment and mitigation (by the professional on behalf of client, public, society or life in general).
Some theorists of political science, notably Carol Moore and Jane Jacobs, emphasize that smaller political units and careful separation of the roles of regulator and trader can improve professional ethics and subordinate them to uniform risk limits that would apply to a particular locale, e.g. an entire urban area.
The political ideal of bioregional democracy arose in part in response to these ideals, and problems of professional jargons and associations alienating power from real people living in real places.
"A profession by definition is in a conflict of interest with respect to the risk passed on to its clients." - Steven Rapaport.
Risk as regret
Risk has no single definition, but some theorists, notably Ron Dembo, have defined quite general methods to assess risk as an expected after-the-fact level of regret. Such methods have been uniquely successful in limiting interest rate risk in financial markets, which are considered a proving ground for general methods of risk assessment.
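Dembo's published formulations are more elaborate, but the core idea of expected after-the-fact regret can be sketched simply: for each possible future state, regret is the gap between the chosen action's payoff and the best payoff attainable in that state with hindsight, weighted by the state's probability. The states, payoffs and probabilities below are hypothetical:

```python
# Expected regret of an action: probability-weighted gap between the action's
# payoff and the best payoff attainable in each state, judged in hindsight.
# All states, payoffs and probabilities are hypothetical.

states = {            # probability of each future state
    "market_up":   0.5,
    "market_flat": 0.3,
    "market_down": 0.2,
}
payoffs = {           # payoff of each action in each state
    "stocks": {"market_up": 120, "market_flat": 100, "market_down": 60},
    "bonds":  {"market_up": 105, "market_flat": 104, "market_down": 103},
}

def expected_regret(action):
    total = 0.0
    for state, p in states.items():
        best = max(payoffs[a][state] for a in payoffs)  # best with hindsight
        total += p * (best - payoffs[action][state])
    return total

for action in payoffs:
    print(action, expected_regret(action))
```

With these numbers the bonds carry less expected regret than the stocks even though the stocks have the higher expected payoff, which is exactly the kind of trade-off a regret-based risk measure is designed to expose.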
However, these methods are also hard to understand. The mathematical difficulties interfere with other social goods such as disclosure, valuation and transparency.
In particular, it is often difficult to tell if such financial instruments are "hedging" (decreasing measurable risk by giving up certain windfall gains) or "gambling" (increasing measurable risk and exposing the investor to catastrophic loss in pursuit of very high windfalls that increase expected value).
As regret measures rarely reflect actual human risk aversion, it is difficult to determine whether the outcomes of such transactions will be satisfactory. Risk seeking describes an individual who cares more about the potential gains from an investment than about its expected gains: for example, an individual who invests in a small stock, knowing there is a large chance of losing some money but a small chance of making a great deal of money.
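The small-stock example can be made concrete with hypothetical numbers: the gamble's expected value falls below that of a safe alternative, yet a risk-seeking investor may still prefer it for the size of the potential gain:

```python
# Risk seeking: preferring a gamble for its potential gain even when its
# expected value is below a safe alternative's. All numbers are hypothetical.

p_win, win, lose = 0.05, 50_000.0, -1_000.0   # small stock: small chance of a big gain
gamble_ev = p_win * win + (1 - p_win) * lose   # expected value of the gamble
safe_return = 2_000.0                          # e.g. a bond

print(f"Gamble expected value: {gamble_ev:.2f}")
print(gamble_ev < safe_return)   # the safe option has the higher expected value
```

A risk-averse investor would take the safe return; the risk seeker is drawn by the 50,000 upside despite the lower expectation, which is why expected value alone does not predict such choices.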
In financial markets one may need to measure credit risk, information timing and source risk, probability model risk, and legal risk if there are regulatory or civil actions taken as a result of some "investor's regret".
Financial markets illustrate a more general problem in defining and assessing risk: the ways that different types of risk combine.
It can be hard to see how the relative risks from different sources should affect one's decisions. For example, when treating a disease a doctor might have the choice of either using a drug that had a high probability of causing minor side effects, or carrying out an operation with a low probability of causing very severe damage.
According to regret theory, the only way to resolve such dilemmas may be to find out more about the patient's life and ambitions. If, for instance, the patient's greatest desire centers on raising children, one might prefer the drug even if it somewhat limits mobility or physical capacity. However, if the patient has already risked their life several times in extreme sporting events, the decision to risk it once more in an operation that may restore full capacity may be far preferable.
This highlights a major problem in professional ethics: knowing when the cognitive bias of the professional versus the client (or "patient") must dominate, and what choices each is best able to make.
Framing is a fundamental problem with all forms of risk assessment. The examples above (body, threat, price of life, professional ethics and regret) show that the risk adjustor or assessor often faces a serious conflict of interest. The assessor also faces cognitive bias and cultural bias, and cannot always be trusted to avoid all moral hazards. This represents a risk in itself, which grows as the assessor becomes less like the client.
For instance, an extremely disturbing event that all participants wish not to happen again may be ignored in analysis despite the fact it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable.
These human tendencies to error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science.
But all decision-making under uncertainty must consider cognitive bias, cultural bias and notational bias: no group of people assessing risk is immune to "groupthink", the acceptance of obviously wrong answers simply because it is socially painful to disagree.
One effective way to solve framing problems in risk assessment or measurement (although some argue that risk cannot be measured, only assessed) is to ensure that scenarios, as a strict rule, must include unpopular and perhaps unbelievable (to the group) high-impact low-probability "threat" and/or "vision" events.
This permits participants in risk assessment to raise others' fears or personal ideals by way of completeness, without others concluding that they have done so for any reason other than satisfying this formal requirement.
For example, an intelligence analyst with a scenario for an attack by hijacking might have been able to insert mitigation for this threat into the U.S. budget, since the scenario would be admitted as a formal risk with a nominal low probability. This would permit coping with threats even though the threats were dismissed by the analyst's superiors.
Even small investments in diligence on this matter might have disrupted or prevented the attack, or at least "hedged" against the risk that an Administration might be mistaken.
Although military decision-making tends to dominate risk theory, the most sophisticated daily practice of risk assessment is in the insurance industry.
The insurers have well-defined roles of actuary, underwriter, agent, auditor and adjustor. Each of these is an assessor in somewhat different circumstances or stages of the insuring, reinsuring, adjustment, recovery and claims payment processes.