Business Automation within Finance, Operations and HR

Improving Reporting for Better Decision Making

2022-05-30

Corporate reporting processes remain critical, but existing bottlenecks are under growing stress as reporting frequencies increase and as stakeholders request additional detail in existing reports, or entirely new ones. Why does friction exist at all, where does it appear, and what further upsides do the latest technologies offer that will also help with emerging directives such as BEPS 2.0?

 

Friction within reporting typically arises from accessing multiple data points that may sit within one or many applications. The irony is that the information is often already in systems, yet accessing, presenting, or deriving it from existing data points can be frustratingly challenging. Ultimately, the goal is to achieve the best possible data quality to drive deep, timely insights for decision making, all against a backdrop of strong compliance.

Five things to reflect upon that draw out where corporates face challenges, and why so much friction pulls domain functional time away from management reviews and into transactional processing:

 

Point 1. Where are the reporting bottlenecks? They can be anywhere, not just at the end point, regardless of reporting focus. Thinking through information flows for decision support, controls or management reporting reveals different types of challenges: waiting for report pack submissions from operating entities (but which one?); executing data transformations; finding the relevant piece of information pre- and post-final preparation in the currency being discussed; a multitude of detailed operational and compliance-driven reviews; and re-familiarisation with spreadsheet cell linkages. All of these, and others, must be completed before the more detailed management reviews can begin.

 

Point 2. Where is the required data? Core data sets typically exist or can be derived, but the crux of the challenge is that relevant detailed data points sit not only within one application but potentially across multiple subsystems. This has typically led to spreadsheets being used as the only conduit for achieving the required data access and transformation.

Any use of spreadsheets is highly inefficient for multiple reasons, such as maintaining consistent data integrity and remembering all cell interdependencies and worksheet relationships. Corporations increasingly recognise these weaknesses and either accept the use of spreadsheets or do not want to see them within core procedures at all; both scenarios are achievable.

 

Point 3. Why are reports often not reviewed in detail? Entities continuously produce multiple reports, and in truth many recipients do not extract enough meaningful information from them, mainly due to time constraints, the voluminous nature of reporting packs, and the lack of immediately relevant detail for the items that are of interest. Unused reports are usually well known to all, yet their repeated production is rarely cancelled, despite each going through monthly preparation and checking.

 

Point 4. Why is information often replicated within other domain teams? Data silos typically exist, meaning the same information is often duplicated to some extent without strong controls, either on compliance or on the shadow IT costs required to make it all happen. Removing these silos is not easy, is often political, and requires an empowered driver with management oversight to make it happen. In essence, this is both a technological and an organisational issue (more below).

 

Point 5. Do your reporting and workflow processes work for you, or vice versa? This is about each end-to-end activity being repeatable and auditable, with reports executed as required to drive an end result from which decisions can be made.

 

How can technology solve this problem?

Technologies have only recently reached a tipping point that lets end users knock out friction, as multiple technologies come together to address its two root causes. The first is the ability to undertake ultra-granular transformations; the second is having the practical compute power to execute them at a specific point in time, which is a question of software architecture design.

The macro end result is the ability for the process owner to define a fully or partially automated business flow from data collection, through all required data transformations and enrichments, to contextual actionable reporting and workflows @anywhere @anytime, plus payments and simulations. This can be particularly useful for businesses wishing to automate more complex environments; some call it Composable Application Architecture.
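To make the idea concrete, here is a minimal sketch of such a composable flow: each stage is a small, replaceable function the process owner chains end to end. The step names, data shapes and threshold are illustrative assumptions, not a specific product's design.

```python
# Minimal sketch of a composable business flow: each step is a small,
# replaceable function, chained end to end by the process owner.
from typing import Callable

Step = Callable[[list[dict]], list[dict]]

def collect(records: list[dict]) -> list[dict]:
    # Data collection: in practice this would pull from ERP/subsystems.
    return records

def enrich(records: list[dict]) -> list[dict]:
    # Transformation/enrichment: tag each record with a reporting currency.
    return [{**r, "currency": r.get("currency", "USD")} for r in records]

def report(records: list[dict]) -> list[dict]:
    # Contextual reporting: keep only records above a review threshold.
    return [r for r in records if r["amount"] >= 1000]

def run_flow(steps: list[Step], records: list[dict]) -> list[dict]:
    for step in steps:
        records = step(records)
    return records

flow = [collect, enrich, report]
result = run_flow(flow, [{"amount": 2500}, {"amount": 400}])
print(result)  # only the 2500 record survives, now tagged with a currency
```

Because each step is independent, one can be swapped, removed or added without redesigning the whole flow, which is the essence of the composability described above.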

The end user benefits from simple operational steps, as the underlying data flow interactions have been defined by the process owner. You are also potentially leveraging existing ERP systems rather than replacing them, extending the protection of past investments.

The micro result is far-reaching: because processes are designed at a lower, granular level, they can be iteratively enhanced to deploy bottom-up and top-down AI, or IoT (for ESG), as required.

 

Where can you deploy digital processes?

Deploying smart business flows that operate within the end-to-end structure defined above can resolve both your complex processes that touch many employees (e.g. budgets, FP&A) and your complex tasks undertaken by a few (consolidation, BEPS 2.0, IFRS 16 / ASC 842), but there is a deeper opportunity to tackle other pain points.

As you read the following examples, consider broadly how the outlined operational concepts might be applied within your organisation, irrespective of domain focus, to remove frictional constraints and drive value.

 

Example 1: FinTech. Smart compliant processes reduce friction for both external and internal stakeholders. In essence, this means reaching out to potential customers, securely collecting their PII data, and performing real-time credit checks with third-party specialist ecosystem agencies to evaluate risk before acceptance. Connectivity for this is achieved using APIs. Accounting entries, including amortisations, can be automatically enriched (see later) and exploded over the life of the contract for ongoing monitoring. For regional rollouts, subtle changes can be made to the standard process to meet legislative requirements, e.g. local regulatory contract acceptance and onboarding rules.
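The "exploding over the life of the contract" step can be sketched as follows: a hypothetical straight-line schedule that turns one contract into monthly amortisation entries, with the final period absorbing rounding. The function name and entry shape are assumptions for illustration only.

```python
# Hypothetical sketch: exploding a contract into straight-line monthly
# amortisation entries over its life, for ongoing monitoring.
from datetime import date

def amortisation_entries(total: float, months: int, start: date) -> list[dict]:
    monthly = round(total / months, 2)
    entries = []
    for i in range(months):
        # Advance month by month, rolling the year over when needed.
        month = (start.month - 1 + i) % 12 + 1
        year = start.year + (start.month - 1 + i) // 12
        # The final period absorbs any rounding difference.
        amount = monthly if i < months - 1 else round(total - monthly * (months - 1), 2)
        entries.append({"period": f"{year}-{month:02d}", "amount": amount})
    return entries

schedule = amortisation_entries(1200.0, 12, date(2022, 6, 1))
print(schedule[0], schedule[-1])
```

In a production system the generated entries would then feed the 24/7 contract-level monitoring described below; a real implementation would also use exact decimal arithmetic rather than floats.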

 

Applicants can pay over the life of the contract using a variety of methods. Corporates can proactively monitor these ongoing activities 24/7 at contract level, or more holistically at aggregate/segment level, with smart actionable contextual reports and workflows.

At a holistic level, the terms and conditions of the initial offer for new applicants can be refined based on deeper demographic analysis, so in effect the process becomes continually fine-tuned over time.

Real-time escalations within the corporate can drive corrective actions for contracts that are going off the rails, i.e. non-payment(s), providing tight controls and ongoing transparency of operational steps. All business flow steps at the macro level, including calculation methodologies, are transparent to the process owner, i.e. no operational black box.

 

Example 2: IFRS 16 / ASC 842 Leases. This is about extracting deeper value from leases and being able to understand different accounting treatments, i.e. reconciliations between them.

At a more detailed level, easier operational insights come from being able to undertake FX currency reconciliations by segment within the balance sheet. https://gaiapm.com.hk/real-estate-automation-digitization-of-leases-to-understand-the-financial-impact/

 

Example 3: Consolidation / BEPS 2.0. End-to-end segment analysis for onward decision making, and thinking through the longer-term impact of BEPS 2.0 in advance of it becoming a requirement. An important facilitator here is data and process enrichment, which eases referring back to source data and is explored further below.

 

Example 4: Procurement & Sourcing. "Within budget" behaviour needs to be driven by the system through tightly managing available budgets, particularly when amounts are approved but not yet consumed. Part of this is being able to route actionable contextual workflows for authorisation @anywhere @anytime differently based on amounts, type of purchase, department and so on, which further underpins compliance, segregation of duties and the like.
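Routing by amount, type and department boils down to a rule table. A minimal sketch, in which every threshold, role name and rule is a made-up assumption purely to show the shape of such logic:

```python
# Illustrative rule table for routing approval workflows by amount,
# purchase type, and department; all thresholds and roles are assumptions.

def route_approval(amount: float, purchase_type: str, department: str) -> list[str]:
    approvers = ["line_manager"]              # everyone starts with their manager
    if purchase_type == "capex":
        approvers.append("asset_controller")  # capex gets an extra reviewer
    if amount > 10_000:
        approvers.append("finance_director")  # high-value items escalate
    if department == "IT" and purchase_type == "software":
        approvers.append("it_security")       # enforce segregation of duties
    return approvers

print(route_approval(25_000, "capex", "Operations"))
# → ['line_manager', 'asset_controller', 'finance_director']
```

Because the rules live in one place, adding a new escalation path for a regional rollout is a local change rather than a process redesign.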

 

Procurement of capex items can leverage data enrichment capabilities for tighter management (more below). There are many examples here: building modifications required to install a new or replacement asset, e.g. specialist hospital machines or factory line robots; whether the asset has special implications for insurance coverage; and ESG implications that might involve IoT tracking.

Actionable contextual workflows can drive the underlying behaviour, rather than requiring disparate unconnected processes, or worse, forgotten ones. Alternatively, and more simply, it might be about ensuring checks and balances are in place, e.g. a budget version that cannot be sent up the line to HQ before local sign-off by all relevant parties.

 

Example 5: Digital HR. Functionally, HR covers everyone in an organisation, and there are many areas where employees and managers can be empowered: self-service @anytime @anywhere; compliance, e.g. procurement and T&E expenses; health and wellbeing initiatives; surveys; and more. The list can be long, but the same end-to-end macro process structure described herein can handle the multiple data types required within both qualitative and quantitative processes.

 

What follows are some observations and comments on where many corporates are today:

 

Digital laggard?

Are you the only one struggling to make all of your processes more efficient? In reality you are far from alone, as one simply cannot change many years of incremental improvements overnight.

 

Where do you start?

Knowing where to start can be problematic, due to the complexity and the underlying reality that so many areas are not standalone but are to some extent interlinked within and across other functional teams. Of course, you still need to break in somewhere.

It is worth mentioning that many digital projects across the globe fail because project teams lack representation from the other stakeholders "touched" by the same process, and because they do not think through systems integration dependencies where the latest digital systems meet legacy.

The high-profile projects that succeed, the ones you read about in FinTech / InsurTech and retail, do so because relevant stakeholders are aligned on the same focused objective, even though they might touch different functional areas.

Expanding on retail: one can typically pinpoint a great UX for end users, but behind the scenes, at least at the moment, many have not progressed to end-to-end integrations within a transaction lifecycle, i.e. enabling a retailer to receive, at say product manager level, actionable contextual information to "instantly" start decision making per line item and resolve supply or demand issues.

 

Considerations for vendor selection

End-to-end process designs encompass both new and old technologies. From an employee's perspective, the ability to execute and engage anywhere, anytime, from any device is what matters. Equally important, from a corporate perspective this can only be done if execution is under all relevant corporate policies and best practice for cybersecurity and privacy, including GDPR. All have to coexist, so as always think them all through and re-evaluate during execution, keeping a watchful eye on the logical, legal, physical and increasingly taxable location of data.

A vendor's domain expertise is critical because it brings a working knowledge and understanding of the critical pain points within the sectors in which they operate. They will also have optimised their solutions to overcome typical issues faced during deployment.

 

How long will digitisation take?

Theoretically, in a perfect world, your end point is full-blown automation across the corporate, i.e. real-time data flows with constant compliance checks. Realistically, this is not a practical possibility for the majority from where they are today, so it will be a much longer journey over many years. However, this is a generalisation, and it can be achieved for prioritised critical areas that add value.

For execution, it is advisable to have a champion user in place to drive changes through to their natural conclusion, as it can be a painstaking task. CFO.com identified that having a champion was a key differentiator when comparing very diverse efficiency metrics and costs per transaction across multiple businesses within the same sector.

 

Can you reuse processes?

Reusing designs might seem a strange suggestion, but it is very appropriate, as changes to accommodate local needs are easier to execute when working at a granular level. Often, processes are to some extent replicated across multiple vendors' systems for the "same" tasks around the world.

Another way to look at the examples above is that they provide the operational glue between systems to drive improvements. Consider, for example, automating the monthly reporting packs that are prepared and flow between entities, where significant work is needed at both ends: preparing the pack, and processing it upon receipt. Save time here and the saving replicates in each entity, freeing even more time month after month. Some processes can also be deployed to sub-entities, which is useful when staff experience in individual areas does not yet exist there.
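Reuse in practice means defining the flow once and parameterising it per entity, so that local adaptations stay small. A minimal sketch under that assumption; all entity names, currencies and settings below are invented for illustration:

```python
# Sketch of reusing one process design across entities: the flow is
# defined once and parameterised per entity. All values are illustrative.

def close_pack(entity: str, currency: str, submit_day: int) -> dict:
    # One shared design; only the per-entity parameters differ.
    return {
        "entity": entity,
        "currency": currency,
        "deadline": f"workday {submit_day}",
        "steps": ["collect", "transform", "review", "submit"],
    }

entities = [("HK-OpCo", "HKD", 3), ("UK-OpCo", "GBP", 4)]
packs = [close_pack(name, ccy, day) for name, ccy, day in entities]
print(packs[0]["deadline"], packs[1]["currency"])
```

An improvement to the shared `close_pack` design immediately benefits every entity that uses it, which is the month-after-month replication effect described above.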

 

What processes can be tackled?

Both quantitative and qualitative processes can be handled end to end. Qualitative processes concern non-monetary transactions, e.g. appraisals, ongoing training, employee activities and wellbeing initiatives, but they still require controlled deployment and execution.

The demarcation lines between the two are obvious, but within any deployment, being able to handle both types is an advantage, as vendors typically do one or the other but rarely both. As mentioned above, multiple data types might need to be handled here.

 

Is this RPA?

RPA, compared with end-to-end processtech (i.e. as defined above: from data collection, through all required data transformations and enrichments, to contextual actionable reporting and workflows @anywhere @anytime, plus payments and simulations), is worth discussing at this point, as there is a lot of industry and market confusion. RPA lends itself to simple repeatable processes, such as moving data from one application to another or onboarding documents, in what is typically augmentation: perhaps 85% efficient, with the remaining 15% wrong and needing correction. It is often seen in larger, multi-staffed single departments, for example in financial services.

RPA fits into the data collection part of processtech as defined above. One might also hear the word hyperautomation, which refers to activities after the data collection and onboarding phase. Over time these demarcation lines will become fuzzier.

Once you are working at a granular level to define a business flow, incorporating bottom-up or top-down AI becomes an option. Before starting, however, it is very important to have an AI outcome in mind against which to measure actual success.

 

Enrichment of data

Enrichment of data comes into play because it can occur during any ultra-granular process. For tiered allocations, it might mean adding allocation calculation details, so that when an entry is identified there is more information about the transaction's source. Another example is tagging, during consolidation, the source entity to which a transaction belongs.

 

X-Application and X-Ecosystem

Touched on by inference with FinTech above, connectivity driven by API integrations is a major enabler of pervasive digital transaction flows, both within and across applications and ecosystems, for increased productivity. Key areas become integrated, smarter, faster to execute and connected. In the press, cloud is often discussed in isolation as a differentiator against legacy, but cloud is a deployment option: digital is more about removing friction and about API connectivity, and depending on circumstances cloud might be the preferred option.

 

Full or partial automations

Automations can be full or partial as required. Recognising that some domain staff might proactively drive process development, one might expect the early UI, for internal use only, to have more visual checks and balances than the end result.

This approach helps build confidence that the result is right, e.g. that the correct data source is used. Some might just want an experienced "eyeball" sign-off: not biometric recognition, but a review in which an experienced member of staff can quickly spot an obvious transactional issue based on past experience.

For some, augmentation just for the process owner might be beneficial: on-screen messaging, absent from the printed version of a report, that identifies data outliers where reported numbers are clearly out of line with, say, past operational trends or run rates.

 

Focus on the end result

Looking at the commentary above, there are seemingly a multitude of angles for the process owner to consider, but it is important to leverage only the areas that drive value for you, noting that changes can be made at a later date. When it comes to data-related compliance issues, business flows can be controlled as required.

 

Dynamic agility

The resultant processes are deeply functional rather than bloated with unused functionality, and are consequently faster to execute, especially when combined with other recent, more detailed technical changes such as compression technologies. They are also easier to maintain, more secure and less buggy, as less code overall has been used in their design and execution, providing the flexibility and agility that are key in today's fluid business environment. In other words, you can adapt, add or remove steps, involve fewer, more or different people, or change technical "environments" as circumstances require.

 

Converting time from transactional processing to management activities is easy to say and hard to execute. While great strides have been made over the years, recent technological changes have now come together to solve the two core causes of friction: ultra-granular transformation and the practical compute throughput to make it happen. Prioritising overall objectives is important, as the journey is a long one, but improvements can be seen fast, especially where the same ones can be replicated within different entities. Be mindful of inter-departmental processes and whether to fight that battle; although there is often deep value to be released, success in these cases depends on a different weighted combination of technologies, change management and project team representation. Lastly, think through technical dependencies, especially where digital meets legacy. A game changer if done right!

 

FlexSystem is a financial, human resources, and operations business software vendor to 1 in 10 Forbes Global 2000 (May 2020), and 1 in 5 Global Fortune 500 (August 2020), operating at the intersection of new digital process and payment technologies, whether on-premise or cloud, to provide you with iterative opportunities for value creation.

 

Leading the Digital Transformation Through Innovation
