Auditing Standard 5: How Now, Brown Cow?
By Francine • Oct 3rd, 2009 • Category: Audit Quality, Latest, Liability Caps, Pure Content, The Case Against The Auditors
On September 24, 2009 the Public Company Accounting Oversight Board issued a report on the first year of implementation of Auditing Standard No. 5, An Audit of Internal Control Over Financial Reporting That Is Integrated with An Audit of Financial Statements (AS No. 5).
The report is based on PCAOB inspections that examined portions of approximately 250 audits of internal control over financial reporting (ICFR) performed by the eight largest domestic registered audit firms in 2007 and 2008. In the spring of 2008, the SEC sent its dog, the PCAOB, to hunt firms that were not implementing Auditing Standard 5 on their clients' audits.
Some regulatory background is necessary to understand the tangle of standards auditors must address in the age of the PCAOB. Although I’m typically hard on the firms, I do sympathize with the sheer volume (and lack of timely, definitive effective dates) of pronouncements, drafts, practice alerts, special projects, and requests for comment that auditors of public companies must contend with. It’s one of the reasons why I wince every time I see the long list of firms registered with the PCAOB. Most of them audit fewer than 100 public companies, are inspected by the PCAOB only every three years, and undoubtedly have a hell of a time maintaining the technical expertise, experience, and research capabilities necessary to audit one public company, let alone a few, or any that are reasonably complex.
In March 2006, the Auditing Standards Board (ASB, part of the AICPA) issued a suite of eight risk assessment standards as part of a joint project with the International Auditing and Assurance Standards Board (IAASB) to update and align their risk assessment requirements. The PCAOB’s interim auditing standards consist of generally accepted auditing standards, as described in the ASB’s Statement on Auditing Standards No. 95, as in existence on April 16, 2003, to the extent not superseded or amended by the Board. As a result, the ASB’s 2006 risk assessment standards are not included in the PCAOB’s interim standards.
I know. Makes me dizzy.
On October 21, 2008, the PCAOB proposed changing its auditing standards related to the auditor’s assessment of, and response to, risk. In general, the PCAOB’s proposed standards are consistent with the ASB’s proposed standards. Auditing Standard 5, implemented on June 12, 2007, was intended to improve implementation of the provisions of the Sarbanes-Oxley Act of 2002 relating to audits of internal control over financial reporting (ICFR).
This month’s PCAOB report on Auditing Standard 5 says:
“Risk assessment underlies the entire AS No. 5 audit process, including the identification of significant accounts and disclosures and relevant assertions, the selection of controls to test, and the determination of the audit evidence necessary for a given control…the inspectors’ review of the firms’ implementation of AS No. 5 focused on the following areas:
• Risk Assessment,
• Risk of Fraud,
• Using the Work of Others,
• Entity-Level Controls,
• Nature, Timing, and Extent of Controls Testing, and
• Evaluating and Communicating Deficiencies.”
I’m going to summarize the findings of the PCAOB report and highlight a few areas that impact the business of the firms. Get a cup of coffee and make yourself comfortable. We’ll be here a while.
In general, the report tries to balance each mention of areas for improvement or deficiencies with a pat on the back and a reassuring hug first. The PCAOB tries to convince us that most of the auditors performed as expected. However, we’ve come to expect the soft touch from a regulatory body made up of former audit firm professionals. The PCAOB is also currently missing quite a few leaders. More importantly, given the range of firms and the number of audits inspected, there’s something negative to say about each of the components reviewed.
Some findings are particularly interesting in light of the financial crisis and the failures of the audit firms to prevent, mitigate, or warn shareholders and other stakeholders of the risks these companies were facing and the catastrophic result of ignoring or hiding them either through negligence or fraud.
Probably the most important area for review was the auditors’ success at risk assessment. The PCAOB inspectors found instances where the auditors failed to adequately assess risk for certain relevant aspects of the audit.
“The auditors failed to identify certain components of an account or certain locations in a multi-location environment that presented different risks of material misstatement of the financial statements than other components of the same account or other locations, respectively. They also failed to evaluate both the qualitative and quantitative factors when determining whether to perform tests of controls at a location…”
Why? There’s an unhealthy emphasis on numbers in the auditing process – strict materiality guidelines, sample sizes, and rigid formulas applied to every aspect of the methodology and audit approach. Less emphasis is placed on qualitative factors or judgment. It’s not easy to apply judgment or draw on experience, industry knowledge, or broader economic awareness when you’re a 24-year-old second-year associate and you’ve never met the engagement partner or any subject matter expert.
Qualitative factors are so much harder to document as evidence, too. How do you scan and tickmark 20 years of experience and the many times you’ve seen management pull the same tricks? Longer, more detailed analyses of strange data, reports on a foreign location’s history and higher fraud risk, doubts about a new system implementation, uncertainty and tension from repeated management changes, and recurring internal audit deficiencies are boring and suck up precious budget hours. No time. No inclination.
The PCAOB inspectors also report that auditors are having a hard time identifying all of management’s relevant assertions when developing their risk assessment. Relevant financial statement assertions are those having meaningful bearing on transactions, accounts, and disclosures. Why is it important that an auditor gets them right? They are the fundamental basis for defining scope of activities (and budget).
An auditor should perform substantive procedures (tests) for all relevant assertions related to each material class of transactions, account balances, and disclosures. That means not neglecting to test the most important assertions even if controls appear strong. Even seemingly effective controls can be compromised due to the constant threat of management override and the inherent limitations of internal controls. In the end, the auditor’s assessment of risk is judgmental. It can never be sufficiently precise to identify all material risks of misstatement so more important assertions with larger risk of misstatement must be tested, and that takes time and money as well as competence and objectivity.
The PCAOB also mentions that there is a general lack of focus on IT throughout the risk assessment process. They later raise additional issues related to IT risk and controls when using the work of others and when determining the timing, nature and extent of testing.
I’ve written about the IT audit issue for the Big 4 many times before:
- Lack of sufficient technical staff at clients as well as on audit teams,
- Pressure on fees,
- Dominance of the financial audit practice in engagement leadership,
- Organizational challenges for the IT audit and risk teams within the audit firms,
- Lack of involvement of most CIOs in Sarbanes-Oxley work and the external audit except by default, and
- Tendency to use inquiry and observation rather than substantive procedures for IT testing.
All of these issues contribute to the failure of audit teams to consider the effects of deficiencies in pervasive controls such as information technology general controls on the risk assessment. Therefore, the scope of tests that are conducted is limited for all the wrong reasons.
Risk of Fraud
Fraud is one of the most contentious issues I write about. It’s happening more and more and in bigger and bigger ways, and yet the audit firms’ standard public relations position (and legal defense) is, “We are not responsible for finding fraud.” The “We were duped” defense is probably the most professionally embarrassing, irresponsible, and self-serving excuse the auditors use for abdicating responsibility for their public duty to shareholders. I’ll keep pushing because it is their duty to both identify the risk of fraud and adjust their audit scope and methodology accordingly in order to mitigate this risk and uncover fraud whenever possible.
The PCAOB, SEC and AICPA support this contention, even if the firms use their lawyers to avoid their responsibility whenever possible. In this report, the PCAOB again tells us that the nature, timing, and extent of auditors’ tests of controls were, in some cases, “not sufficiently responsive to an identified fraud risk because auditors either failed to alter the extent of testing in areas of greater risk, or they failed to identify and test compensating controls when the controls identified and tested did not completely address the identified risk.
“The inspectors also observed instances where auditors either did not evaluate all the relevant processes of the company’s period-end financial reporting process or did not appropriately test the design or operating effectiveness of controls to address the risk of management override.”
Using the Work of Others
When it was originally adopted, Auditing Standard 5 was intended, in the PCAOB’s own words, to focus the auditor on the matters “most important” to internal control, eliminate “unnecessary procedures”, simplify and shorten the standard by “reducing detail and specificity”, and make the audit more scalable for smaller and less complex companies.
“Considering and Using the Work of Others” superseded AU sec. 322 and the direction currently contained in AS No. 2 regarding using the work of others.
High auditor (and other) fees required to meet Sarbanes-Oxley requirements were the major complaint of the business community. Fees paid to audit firms increased substantially after Sarbanes-Oxley, which the firms justified by pointing to the additional work AS No. 2 required. AS No. 5 was intended to reduce the extra work, and therefore the higher Big 4 fees, attributed to Sarbanes-Oxley.
I predicted that the audit firms would use this new provision to take on more work themselves, skirting the independence rules in order to put more of the Sarbanes-Oxley work into the internal control review bucket under the external audit. A lot of companies spent a lot of time and money trying to get the work done internally and expecting the Big 4 firms to eventually accept it and mitigate their effort and fees. But, until recently, the firms just wouldn’t do it.
Under AS No. 2, the audit firms interpreted the standards very strictly, and they continued to do so for as long as possible under AS No. 5. This was partly due to their desire to do as much of the work and reap as much of the fee as legally possible, the “share of the wallet” concept, and partly due to concerns about liability. Finally, in early 2008, their clients, squeezed themselves by the economic downturn, took the upper hand. The auditors’ ability to fight scope-reduction demands was limited by economic reality and by losses of clients to failures, takeovers, and the bailouts. They started to cut their bloated rolls and were then hard pressed to push for more work given staffing constraints, especially in the IT audit arena.
As a result, this report describes several instances where PCAOB inspectors saw further opportunities for the auditors to use the work of others when the assessed level of risk was lower, including when testing certain system reports and application controls. These audit firms were probably trying to hold on to as much as they could.
In other instances, probably in larger issuers, the inspectors observed instances “where the extent of the auditor’s use of the work of others to reduce the auditor’s own work was greater than was appropriate under AS No. 5 considering the level of risk associated with the control being tested (e.g., in the area of controls over journal entries, which generally would be considered higher risk because of the risk of management override or other risk of fraud).”
Given the presence of other Big 4 firms as internal audit co-sourcers and pressure from clients to use the work of their finance and internal auditors to cut costs, I expect the auditors became reluctant to question qualifications of internal staff or professionals from another Big 4 firm. Inspectors observed certain instances where auditors, “performed few or no procedures to assess the competence of the others relative to the task being performed, or they did not adequately assess the objectivity of the others, particularly where the work was performed by company personnel other than internal auditors.”
Finally, “get it done and get it over with” pressures probably resulted in limited discernment about whom or what to retest and how much. The inspectors also “observed numerous instances where the extent of the auditors’ retesting of the work of others was seemingly unrelated to the risks involved (e.g., a uniform approach of retesting 20 percent of the controls tested).”
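The inspectors’ point about uniform retesting can be made concrete with a toy sketch. The risk tiers and percentages below are my invention for illustration only; AS No. 5 prescribes no such formula:

```python
# Illustrative only: risk-scaled retesting of the work of others.
# The tiers and percentages are hypothetical assumptions, not from AS No. 5.

RETEST_PCT = {"low": 0.10, "moderate": 0.25, "high": 0.60}

def retest_count(controls_tested: int, risk: str) -> int:
    """Number of another party's tested controls the auditor re-performs."""
    return round(controls_tested * RETEST_PCT[risk])

# A flat 20% retests the same 8 of 40 controls regardless of risk;
# a risk-responsive plan varies the extent with the risk involved:
print([retest_count(40, r) for r in ("low", "moderate", "high")])  # [4, 10, 24]
```

The point is not the particular percentages but that the retest extent should be a function of risk at all, rather than a constant.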
Entity-Level Controls
The control environment sets the tone of an organization and has a huge influence on the control consciousness of its people. It is the foundation for all other components of internal control, providing discipline and structure to all activities.
“In evaluating the design of the entity’s control environment, the auditor should consider the following elements and how they have been incorporated into the entity’s processes:
a. Communication and enforcement of integrity and ethical values.
b. Commitment to competence.
c. Participation of those charged with governance.
d. Management’s philosophy and operating style.
e. Organizational structure.
f. Assignment of authority and responsibility.
g. Human resource policies and practices.
For example, management’s response to internal control deficiencies communicated in prior periods may relate to one or more of the aforementioned elements, such as commitment to competence or management’s philosophy and operating style…The auditor should obtain sufficient knowledge of the control environment to understand the attitudes, awareness, and actions of those charged with governance concerning the entity’s internal control and its importance in achieving reliable financial reporting. In understanding the control environment, the auditor should concentrate on the implementation of controls because controls may be established but not acted upon.”
Unfortunately, the PCAOB inspectors found that in some instances:
“…auditors did not evaluate entity-level controls beyond those associated with the control environment and the period-end financial reporting process. (Inspectors were told in certain cases that the auditors did not evaluate other entity-level controls because the issuer had not done so.)
Some auditors identified entity-level controls that appeared to be designed to operate with a high degree of precision, but failed to obtain sufficient audit evidence of their operating effectiveness. There also were instances where the auditors identified and tested entity-level controls and found them to be designed and operating with a high degree of precision, but did not alter their tests of process-level controls in response to that assessment.
There also were situations where auditors inappropriately reduced their testing of process-level controls based on reliance on entity-level controls. In certain of these instances, the auditors failed to consider the precision with which the entity-level control addressed a relevant financial statement assertion. In other instances, the auditors determined that the entity-level control was not operating at a level of precision sufficient to address the risk related to a relevant financial statement assertion, yet they nonetheless reduced the testing of the process-level controls for the relevant assertion.”
It looks like there was generally a “precision” problem. Control precision describes the alignment or correlation between a particular control procedure and a given control objective or risk. A control with direct impact on the achievement of an objective (or mitigation of a risk) is said to be more precise than one with indirect impact on the objective or risk. The inability to discern the level of precision and its impact on testing seems to be a training issue and, perhaps, a direct result of the level of professional assigned to perform these high-level, critical assessments. The subtlety of the “precision” component is lost on professionals who are too junior or inexperienced.
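The precision judgment the inspectors describe can be sketched as a simple decision rule. The control names and the direct/indirect scoring below are invented examples, not from the PCAOB report:

```python
# Illustrative sketch only: "precision" as the directness of a control's
# impact on a control objective or risk. Control names and the
# direct/indirect classification are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Control:
    name: str
    impact: str  # "direct" or "indirect" impact on the objective or risk

def may_reduce_process_testing(entity_level_control: Control) -> bool:
    """Only a sufficiently precise (direct-impact) entity-level control
    justifies reducing process-level testing for the same assertion --
    the judgment the inspectors found auditors skipping."""
    return entity_level_control.impact == "direct"

budget_review = Control("Board-level budget-to-actual review", "indirect")
reconciliation = Control("Controller's account-level reconciliation review", "direct")

print(may_reduce_process_testing(budget_review))   # False
print(may_reduce_process_testing(reconciliation))  # True
```

A high-level budget review may catch a large misstatement eventually, but only a control operating at the account and assertion level is precise enough to stand in for process-level testing.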
Nature, Timing, and Extent of Controls Testing
After completion of the risk, fraud, and controls assessment activities, including identifying relevant assertions and determinations of the precision with which entity-level controls are operating, it’s time to test. Unfortunately, as I have seen in engagements I have been involved in, the actual testing often doesn’t correlate well with the voluminous spreadsheets and matrices developed in the earlier phases intended to tell professionals what to test, how to test, and to what extent to test certain controls.
How does this happen? Inadequate engagement management, inadequate supervision and review of work, changes in personnel, poor team communication especially when changes occur, fatigue, and rushing at the end which forces professionals to ignore all instructions, do what they have time/energy/inclination to do, and then retro-justify the decision by fudging the documentation.
The result is PCAOB inspectors noting that “in certain cases, the auditors did not consider the assessed level of risk when selecting controls to be tested, or the controls selected were not designed to address the risk of misstatement to the relevant assertion(s).”
In particular for IT general controls, segregation of duties controls, and application controls over authorization, disconnects between what auditors intended to do and what and how the tests actually get done occur quite frequently, in my experience. The PCAOB inspectors observed situations where auditors “failed to test a relevant control appropriately or, in some cases, at all. For example, inspectors observed instances where the auditors’ testing of controls over financially significant applications was dependent on appropriate segregation of duties, but the auditors did not test to determine whether appropriate segregation of duties existed.”
Similarly, in some instances:
“…the auditors tested certain controls without testing the system-generated data on which the tested controls depended; the auditors did not test controls over applications that processed financially significant transactions, including important manual spreadsheets; or the auditors observed evidence of review and approval controls (e.g. management signoff evidencing review and approval) without testing the design or operating effectiveness of management’s controls.”
“In some instances, the auditors did not obtain service auditors’ reports (SAS 70) related to controls at outside service organizations, or the auditors failed to perform procedures related to the necessary user controls identified in the service auditors’ reports.”
I have written before about the SAS 70 issue and the fact that it has been neglected in most Sarbanes-Oxley initiatives given the number of companies that have little or no centralized control over this process. It’s one of those things, like disaster recovery and business continuity, that hardly anyone does well, an “elephant in the room” type vulnerability for most companies.
“Inspectors also observed instances where the evidence gathered by the auditor was insufficient to support a conclusion that the controls were operating effectively, yet the audit team relied on the supposed effectiveness of those controls to reduce the scope of other audit procedures. For example, inspectors noted instances where the operating effectiveness of higher-risk controls was tested solely through inquiry and observation, which are tests that ordinarily produce less audit evidence than other tests, such as inspection of relevant documentation or re-performance of a control. In other instances, auditors did not test the completeness of the population from which items were selected for testing. Inspectors also observed instances where the extent of audit procedures was similar for both lower- and higher-risk controls.”
Evaluation of Deficiencies
Finally, when all the work is done, the auditors count up the various deficiencies, categorize them, group them, and combine them to determine if one or several rise to significant deficiencies or material weaknesses. The categorization of a control deficiency depends on whether there is a reasonable possibility that a company’s controls will fail to prevent or detect a misstatement and on the magnitude of the potential misstatement. This exercise is the source of much discussion, first within an issuer’s own Sarbanes-Oxley team, then separately within the external auditor’s team based on its own work, and then in joint meetings where the two compare their lists, discuss discrepancies, and resolve disagreements via negotiation and horse-trading to arrive at the agreed-upon list of deficiencies and their severity.
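That two-factor severity rule, likelihood of failure and magnitude of the potential misstatement, can be sketched as code. The thresholds below are invented for illustration; the standard requires judgment, not a formula:

```python
# Illustrative sketch of the severity decision described above:
# severity depends on BOTH the likelihood that controls fail to prevent or
# detect a misstatement AND the magnitude of the potential misstatement --
# never solely on the materiality of errors actually identified.
# The 20%-of-materiality threshold is a hypothetical illustration.

def classify_deficiency(reasonable_possibility_of_failure: bool,
                        potential_misstatement: float,
                        materiality: float) -> str:
    if reasonable_possibility_of_failure and potential_misstatement >= materiality:
        return "material weakness"
    if reasonable_possibility_of_failure and potential_misstatement >= 0.2 * materiality:
        return "significant deficiency"
    return "deficiency"

# The POTENTIAL misstatement drives the call, even if no error was found:
print(classify_deficiency(True, potential_misstatement=6_000_000,
                          materiality=5_000_000))  # material weakness
```

Note what the sketch deliberately omits: the size of any error actually caught. Basing severity solely on identified errors is exactly the shortcut the inspectors flagged.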
Inspectors observed some instances where auditors “inappropriately based their conclusions about the severity of control deficiencies solely on the materiality of the identified errors in the financial statements. Also, some auditors failed to consider relevant risk factors when evaluating the severity of identified control deficiencies. In addition, there were instances where the auditors did not consider whether certain control deficiencies identified through using the work of others, in combination with other identified control deficiencies, constituted a material weakness in controls.”
In some instances, “the compensating controls that the auditors identified and tested were not sufficiently precise or did not operate effectively to mitigate the risks associated with the identified deficiencies.” In addition, the inspectors observed that “certain auditors’ required communications of identified control deficiencies to management or the audit committee were incomplete.”
When push comes to shove, the auditors are always in a position of deference to the Audit Committee and/or client management. If their incompetence, inexperience, and unwillingness to challenge technical and industry practices doesn’t do them in, their lack of objectivity and close relationship with management in service to keeping their big fees does. Granted, in rare instances, auditors do resign or let themselves be fired because of standing their ground. But by that time the situation is so bad, their potential liability so great, it scares the hell out of both sides and bodes ill for the shareholder. Ironically, it’s the next auditor, and there’s always one willing to step in even in the messiest of situations, that typically forces the client to do what the other may have been trying to do for a while.
Is audit firm rotation, like auditor signatures on reports and greater transparency of audit firm financials, the way to force much-needed audit firm and auditor accountability right now?