In August, Birny Birnbaum, the executive director of the Center for Economic Justice, asked the [NAIC] Market Regulation committee to train analysts to detect “dark patterns” and to define dark patterns as an unfair and deceptive trade practice.
The term “dark patterns” refers to techniques an online service can use to get consumers to do things they would otherwise not do, according to draft August meeting notes included in the committee’s fall national meeting packet.
Dark pattern techniques include nagging; efforts to keep users from understanding and comparing prices; obscuring important information; and the “roach motel” strategy, which makes signing up for an online service much easier than canceling it.
OpenAI inside Excel? How can you use an API key to connect to an AI model from Excel? This video shows you how. You can download the files from the GitHub link above. Wouldn’t it be great to have a search box in Excel you can use to ask any question? Like to create dummy data, create a formula or ask about the cast of The Sopranos. And then artificial intelligence provides the information directly in Excel – without any copying and pasting! In this video you’ll learn how to set up an API connection from Microsoft Excel to OpenAI’s ChatGPT (GPT-3) by using Office Scripts. As a bonus, I’ll show you how to parse the result if the answer from GPT-3 spans more than one line. This makes it easier to use the information in Excel.
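The flow the video describes (send a prompt with your API key, get an answer back, split a multi-line answer into one value per row) can be sketched outside Office Scripts as well. The sketch below uses Python against OpenAI's legacy completions endpoint; the model name, prompt handling, and helper names are illustrative assumptions, and you would need your own API key.

```python
import json
import urllib.request

def ask_gpt(prompt: str, api_key: str) -> str:
    """POST the prompt to OpenAI's (legacy, GPT-3-era) completions endpoint
    and return the raw answer text. Model name is illustrative."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=json.dumps({
            "model": "text-davinci-003",
            "prompt": prompt,
            "max_tokens": 256,
        }).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]

def to_rows(answer: str) -> list[str]:
    """Split a multi-line answer into clean rows, ready to write
    one per spreadsheet cell (the 'parse the result' bonus step)."""
    return [line.strip() for line in answer.splitlines() if line.strip()]
```

The same two-step shape (one function for the HTTP call, one for parsing) carries over to an Office Script, where the parsing step would write each row into a cell of the active worksheet.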
By way of a few paragraphs inserted into the recently enacted 4,000-page 2023 National Defense Authorization Act, Congress mandated that state and local governments prepare their annual financial statements in a standardized format that is electronically searchable. The provision effectively drags state and local governments kicking and screaming into the 20th century, if not the 21st.
As worthy an accomplishment as this appears to be, it was resisted mightily by the state and local government financial community. Most prominently, they argue that the measure could result in a major transfer of accounting and reporting regulatory authority from the states to the federal government, thereby undercutting what many consider a fundamental principle of federalism. Moreover, state and local officials see it as one more costly unfunded mandate imposed upon their governments.
The opposition by state and local governments is understandable. But they have no one to blame but themselves. To this day they are wedded to a technological past. In a perverted way, they may be getting their just deserts. The act requires them to do little more than what they should have done years ago on their own for the benefit of their investors and other stakeholders.
Implicit in the act is that governments will have to prepare their financial statements using XBRL (eXtensible Business Reporting Language) or some comparable reporting framework. This is the format that the Securities and Exchange Commission, which would be charged with implementing the new provision, currently requires corporations to use in their financial filings. XBRL requires all entities to classify each of the elements of their financial statements (e.g., assets, liabilities, revenues and expenses) by identical rules and in machine-readable form.
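To make "machine-readable tagging" concrete, here is a rough sketch of what an XBRL fact looks like. The element names, context and unit identifiers below are simplified placeholders for illustration, not drawn from an actual SEC or governmental taxonomy:

```xml
<!-- Illustrative only: names and contexts are simplified placeholders -->
<xbrl xmlns:gov="http://example.org/hypothetical-gov-taxonomy">
  <gov:Assets contextRef="FY2023" unitRef="USD" decimals="0">1500000</gov:Assets>
  <gov:Liabilities contextRef="FY2023" unitRef="USD" decimals="0">900000</gov:Liabilities>
</xbrl>
```

Because every filer tags "Assets" with the same element under the same rules, software can pull and compare that figure across thousands of filings without a human reading each PDF.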
This paper is an introduction to AI technology, designed to help actuaries understand how the technology works, the potential risks it could introduce, and how to mitigate those risks. The author focuses on data bias, as it is one of the main concerns with facial recognition technology. This research project was jointly sponsored by the Diversity, Equity and Inclusion Research and the Actuarial Innovation and Technology Strategic Research Programs.
There are many books about spreadsheets out there. Most of these books will tell you things like “How to save a file” and “How to make a graph” and “How to compute the present value of a stream of cashflows” and “How to use conjoint analysis to figure out which features you should add to the next version of your company’s widgets in order to impress senior management and get a promotion and receive a pay raise so you can purchase a bigger boat than your neighbor has.”
This book isn’t about any of those. Instead, it’s about how to Think Spreadsheet. What does that mean? Well, spreadsheets lend themselves well to solving specific types of problems in specific types of ways. They lend themselves poorly to solving other specific types of problems in other specific types of ways.
Thinking Spreadsheet entails the following:
Understanding how spreadsheets work, what they do well, and what they don’t do well.
Using the spreadsheet’s structure to intelligently organize your data.
Solving problems using techniques that take advantage of the spreadsheet’s strengths.
Building spreadsheets that are easy to understand and difficult to break.
To help you learn how to Think Spreadsheet, I’ve collected a variety of curious and often whimsical examples. Some represent problems you are likely to encounter out in the wild, others problems you’ll never encounter outside of this book. Many of them we’ll solve multiple times. That’s because in each case, the means are more interesting than the ends. You’ll never (I hope) use a spreadsheet to compute all the prime numbers less than 100. But you’ll often (I hope) find useful the techniques we’ll use to compute those prime numbers, and if you’re clever you’ll go away and apply them to all sorts of real-world problems. As with most books of this sort, you’ll really learn the most if you recreate the examples yourself and play around with them, and I strongly encourage you to do so.
Author(s): Joel Grus
Publication Date: originally published in dead-tree form in 2010; accessed 29 Oct 2022
Professionalization leads us to an interesting dilemma. Actuarial culture and, for that matter, organizational culture got insurance companies to where they are today. If the culture were not moderately successful, then the company would not still exist. But this is where Prospect theory emerges from the shadows. It is human nature not to want to lose the culture that enabled your success. Many people nonetheless thirst for the gains earned by moving in a new direction. Risk aversion further reinforces the stickiness of culture, especially for risk-averse professions and industries. Drawing from author Tony Robbins, you cannot become who you want to be by staying who you currently are. Our professionalization, coupled with our risk aversion, creates a double whammy. Practices appropriate to prior eras have a propensity to be locked in place. Oh, but it gets worse!
By the nature of transformation and modernization, knowledge and know-how are embedded in the current people, processes and systems. That knowledge and know-how must be migrated from the prior technology to the modern technology. Just as your computer’s hard drive gets fragmented, so too does a firm’s expertise as people change focus, move jobs or leave companies. The long-dated nature of our promises can severely exacerbate the issue. Human knowledge and know-how are not very compressible, unlike biological seeds and eggs. In a time-consuming defragmenting exercise, information, knowledge and know-how must be painstakingly moved, relearned and adapted for the new system. This transformation requires new practices, further exacerbating the shock to the culture. Oh, but it gets even worse!
The transformation process requires existing teams to change, recombine or communicate in new ways. This means their cultures will potentially clash. Lack of trust and bureaucracy are the most significant frictions to collaboration among networks. The direct evidence of this is when project managers vent that teams x, y and z cannot seem to work together. It is because they do not have a reference system to know how to work together.
Before we get into the different approaches, why should you care about knowing multiple ways to calculate a distribution when we have a perfectly good symbolic formula that tells us the probability exactly?
As we shall soon see, having that formula gives us the illusion that we have the “exact” answer. We actually have to calculate the elements within. If you try calculating the binomial coefficients up front, you will notice they get very large, just as those powers of q get very small. In a system using floating point arithmetic, as Excel does, we may run into trouble with either underflow or overflow. Obviously, I picked a situation that would create just such troubles, by picking a somewhat large number of people and a somewhat low probability of death.
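The overflow/underflow trap described above is easy to reproduce. In the sketch below (the portfolio size and mortality rate are illustrative numbers of my own, not taken from the article), the direct formula fails in both directions at once, while evaluating the log of the pmf via `lgamma` and exponentiating once at the end stays well behaved:

```python
import math

def binom_pmf_naive(n, k, p):
    """Direct evaluation. For large n, math.comb(n, k) exceeds float range
    (OverflowError on conversion) while q**(n-k) underflows to exactly 0.0."""
    q = 1.0 - p
    return math.comb(n, k) * p**k * q**(n - k)

def binom_pmf_log(n, k, p):
    """Evaluate log C(n,k) + k*log(p) + (n-k)*log(q) in log space,
    exponentiating only at the end."""
    q = 1.0 - p
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log(q))
    return math.exp(log_pmf)

n, p = 1_000_000, 0.001   # many lives, low probability of death (illustrative)

# The power of q alone already underflows to exactly 0.0 ...
print(0.999 ** (n - 1000))            # 0.0
# ... and float(math.comb(n, 1000)) raises OverflowError, so the naive
# product is either zero or an error, never the true probability.

# The log-space version is fine around the mean n*p = 1000:
print(binom_pmf_log(n, 1000, p))      # ≈ 0.0126
```

Log-space evaluation is only one of the escape routes; recursive schemes that build P(X=k+1) from P(X=k) avoid the large coefficients for the same reason, by never forming either extreme factor on its own.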
I am making no assumptions about the specific use being made of the full distribution. It may be that one is attempting to calculate Value at Risk or Conditional Tail Expectation values. It may be that one is constructing stress scenarios. Most of the places where the following approximations fail are generally not areas of concern to actuaries. In the following I will look at how each approximation behaves, and why one might choose that approach over the others.
Unfortunately, fraud is rampant in the insurance industry. Property and casualty insurance alone loses about $30 billion to fraud every year, and fraud occurs in nearly 10% of all P&C losses. ML can mitigate this issue by identifying potential claim situations early in the process. Flagging early allows insurers to investigate and correctly identify a fraudulent claim.
5. Claims processing
Claims processing is notoriously arduous and time-consuming. ML technology is a tool to reduce processing costs and time, from the initial claim submission to reviewing coverages. Moreover, ML supports a great customer experience because it allows the insured to check the status of their claim without having to reach out to their broker/adjuster.
It’s time for carriers to innovate rapidly to respond to the buying preferences of members of these generations.
As an example, if the target customers are millennials and Gen Zers who need a simple term life insurance solution, you may want to focus on instant decision underwriting and lower face amounts to meet the most basic needs.
This approach could mean that many historical riders and features are actually not necessary.
It likely also means that the tools used in underwriting need to focus on information that is available instantly as opposed to traditional methods that could take weeks or even months.
But in a court filing Monday, Jonathan Marks, the deputy elections secretary, acknowledged that a fourth county, Butler, had also refused to count those ballots — and that the county had notified the department three weeks before the lawsuit was filed.
Marks apologized to the court for what he described as an oversight resulting from “a manual process” — a spreadsheet — the department had used to track which counties were counting undated ballots. Butler County was misclassified in the spreadsheet, he said, and from that point forward was left out of the state’s campaign to push counties that hadn’t included them.
Today, small commercial insurers can leverage AI/ML data and analytics as part of a holistic solution to transition from manual underwriting workflows that lean on lengthy applications and web research to one where quotes are issued and most policies are bound automatically and (nearly) instantly.
The journey has three broad steps.
Step 1: Prefill: Leveraging an array of data sources, including unstructured data sourced from computer vision algorithms, insurers can prefill application data on small commercial risks using just a business name and address. Human underwriters can then review this information against underwriting guidelines without having to chase down data through web searches or phone calls.
Step 2: Selective automation: Based on risk appetite, certain industry classes can be identified for automated underwriting. In this environment, application data is prefilled and then automatically analyzed against insurer underwriting guidelines to determine acceptance or whether additional information is required.
Step 3: Full-blown automation: As insurers learn from step two, it’s a short leap to step three, which is to fold additional businesses into the automated workflow. Even in a fully automated environment, there are some risk exposures that may trigger manual reviews of submissions. But by leveraging the efficiency gains delivered by application prefill and the automated underwriting of select industry classes, insurers can set themselves up to drive automation across a much wider array of risks than they ever thought possible.
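The "selective automation" gate in step two amounts to a short decision rule: accept automatically only when the class is in appetite, the prefilled data is complete, and the guidelines pass; otherwise refer to a human. The sketch below is a minimal illustration of that triage, and every name in it (the industry classes, field names, and the total-insured-value threshold) is invented for the example, not taken from any insurer's actual guidelines:

```python
# Classes approved for automated underwriting (hypothetical appetite)
APPETITE = {"retail", "office"}
REQUIRED_FIELDS = ("tiv", "year_built")   # prefilled data the guidelines need

def underwrite(application: dict) -> str:
    """Return 'bound', 'refer', or 'decline' for a prefilled application."""
    if application.get("industry_class") not in APPETITE:
        return "refer"        # outside appetite -> route to a human underwriter
    if any(application.get(f) is None for f in REQUIRED_FIELDS):
        return "refer"        # prefill incomplete -> request additional information
    if application["tiv"] > 5_000_000:
        return "decline"      # example guideline: total insured value cap
    return "bound"            # in appetite, complete, within guidelines
```

Step three then widens `APPETITE` and the guideline set as the insurer gains confidence, while exposures outside the rules continue to fall through to manual review.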
The biggest barrier to entry I’ve found when I’m learning a new language is that small concepts of the language are usually presented outside of any useful context. Most programming language tutorials will start with printing “HELLO, WORLD!” (and this book is no exception). Usually that’s pretty simple. After that, I usually struggle to write a complete program that will accept some arguments and do something useful.
In this book, I’ll show you many, many examples of programs that do useful things, in the hopes that you can modify these programs to make more programs for your own use.
More than anything, I think you need to practice. It’s like the old joke: “How do you get to Carnegie Hall? Practice, practice, practice.” These coding challenges are short enough that you could probably finish each in a few hours or days. This is more material than I could work through in a semester-long university-level class, so I imagine the whole book will take you several months. I hope you will solve the problems, then think about them, and then return later to see if you can solve them differently, maybe using a more advanced technique or making them run faster.