
Code First Vs API First – A shift in the philosophy of software development

“Code first” and “API first” are two different approaches to software development, and they have distinct differences and use cases. Here are reasons to consider the “API first” over “Code First” approach:

Reasons to Consider API First:

Clarity and Consistency: Designing the API first helps ensure a clear and consistent interface for your application. This clarity can lead to fewer misunderstandings and mistakes during development.

Collaboration: API-first design allows teams to work in parallel. While the API is being designed, development teams can start implementing their components, leading to faster development cycles.

Documentation: API-first design encourages the creation of thorough and up-to-date API documentation from the beginning, making it easier for developers to understand and use the API.

Ecosystem and Integration: If you plan to make your application accessible to external developers or integrate it with third-party services, a well-designed API is crucial. An API-first approach ensures your API is suitable for external use.

Versioning and Maintenance: A well-designed API makes versioning and maintaining the system easier. It can be less disruptive to make changes or additions to the API without affecting the core application logic.

Reduced Dependencies: API-first can lead to better separation of concerns. It can reduce dependencies between the application logic and the API, making the system more modular and maintainable.

Testing: You can create test mocks for the API before it’s implemented, allowing for early testing of other components that rely on the API.
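To make this concrete, an API-first team might agree on a small, machine-readable contract before any implementation exists. The sketch below is a hypothetical OpenAPI fragment (the Orders API, its path, and its schema are invented for illustration); mock servers, client stubs, and documentation can all be generated from such a file while the backend is still being built.

```yaml
# Hypothetical API-first contract: agreed and reviewed before any application
# code is written. Tools can generate mocks and client stubs from it.
openapi: 3.0.3
info:
  title: Orders API          # illustrative service name
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Fetch a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Order"
        "404":
          description: Order not found
components:
  schemas:
    Order:
      type: object
      required: [id, status]
      properties:
        id:
          type: string
        status:
          type: string
          enum: [pending, shipped, delivered]
```

Because the contract is explicit, frontend teams can code against a generated mock while backend teams implement the real endpoint in parallel.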

While the “code first” approach in software development can be effective in certain situations, there are several reasons why it might not always be the best choice. Some of the key reasons include:

Lack of clear requirements: Starting with code before understanding the project requirements thoroughly can lead to a mismatch between the code and the actual needs of the project. This can result in the need for frequent code revisions and changes, which can be time-consuming and costly.

Poor scalability and maintainability: Code that is developed without a clear architectural plan or design can become difficult to scale and maintain as the project grows. This can lead to a complex and unmanageable codebase, making it challenging for developers to make changes and enhancements in the future.

Increased development time: Without a clear plan and design, the development process can become inefficient and time-consuming. Developers may spend more time troubleshooting and fixing issues that arise due to the lack of a structured approach, leading to project delays and increased costs.

Higher risk of errors and bugs: Starting with code first can increase the likelihood of introducing errors and bugs into the software, as there might be a lack of proper planning and testing. This can result in a lower-quality product that requires extensive debugging and testing before it can be considered stable and reliable.

Inefficient use of resources: Developing code without a clear understanding of the project requirements and architecture can lead to the inefficient use of resources, including time, money, and human resources. This can ultimately impact the overall success and profitability of the project.

Considering these drawbacks, it is advisable to follow a structured approach that includes proper planning, requirement analysis, and design before delving into the coding phase.

In summary, an “API first” approach is valuable when you want to prioritize a well-defined, consistent, and well-documented API, foster collaboration among development teams, and ensure your application is well-suited for integration with other systems and external developers. However, the choice between “code first” and “API first” should be based on your project’s specific requirements and constraints.

In conclusion, by defining clear API specifications early in the development process, Cognine enables parallel development and fosters collaboration between frontend and backend teams. This proactive approach aligns with the company’s commitment to delivering high-quality, well-documented APIs and helps create robust, scalable software solutions that meet both client and internal requirements.


Can Design, Development, and Product Management Work Simultaneously?

Let us begin by answering the question posed in the title of this blog. The answer is ‘yes.’ However, there are some key factors to consider before concluding that design, development, and product management can work simultaneously to benefit the tech industry.

These are three essential pillars in the product creation process, each with its unique perspective and responsibilities. When these three disciplines work together seamlessly, they can create exceptional products that meet user needs and drive business growth.

At Cognine Technologies, this collaboration is not just a buzzword but a way of life. In this blog, we’ll delve into how Cognine Technologies brings together these critical disciplines to create custom cutting-edge products that lead the market.

 1. Establish Clear Communication Channels

Effective communication is the foundation of successful collaboration. Designers, developers, and product managers need to establish clear communication channels to ensure everyone is on the same page. Regular meetings, such as daily stand-ups, design reviews, and sprint planning sessions, can facilitate the exchange of ideas, progress updates, and feedback.

Furthermore, creating a shared digital workspace where team members can collaborate on documents, designs, and project management tools can significantly enhance communication and visibility into each other’s work.

2. Define Roles and Responsibilities

To avoid confusion and duplication of effort, it’s crucial to define clear roles and responsibilities for each team member. Here’s a general breakdown:

  • Designers: Responsible for creating user interfaces, wireframes, and prototypes that align with user needs and the product’s overall vision.
  • Developers: Translate the design concepts into functional code, focusing on scalability, performance, and technical feasibility.
  • Product Managers: Act as the bridge between the development and design teams and are responsible for defining the product strategy, prioritizing features, and ensuring alignment with business goals.

By clearly defining these roles, each team member can focus on their core responsibilities, leading to more efficient collaboration.

3.  Foster a User-Centric Approach

Successful products are those that solve real user problems and provide value. To achieve this, all three teams must adopt a user-centric approach. Product managers should gather user feedback, conduct market research, and define user personas. Designers should create intuitive and user-friendly interfaces, while developers should build features that are not only functional but also align with the user experience.

Regular usability testing and feedback loops should be established to ensure that the product continually evolves to meet user needs and expectations.

4.  Embrace Agile Methodologies

Agile methodologies, such as Scrum or Kanban, promote flexibility, adaptability, and iterative development. These methodologies encourage frequent collaboration and allow teams to respond to changing market conditions and user feedback. Product managers can prioritize features based on user feedback, and developers and designers can adjust their work accordingly during sprint planning and review meetings.

5.  Prioritize Features and Roadmap

Product managers play a critical role in prioritizing features and defining the product roadmap. They should collaborate closely with the design and development teams to ensure that the roadmap aligns with the product’s vision and user needs. Regularly reviewing and adjusting the roadmap based on feedback and market trends is essential for staying agile and competitive.

6.  Encourage Cross-Functional Teams

In some cases, it may be beneficial to organize cross-functional teams, where designers, developers, and product managers work closely together on specific projects. This approach promotes a shared understanding of project goals and fosters a sense of ownership and collaboration among team members.

Conclusion

In conclusion, the simultaneous collaboration of design, development, and product management is not only possible but also pivotal in crafting innovative and user-centric products in the tech industry. Cognine Technologies exemplifies this synergy, ensuring that these three vital disciplines coalesce effectively through clear communication, defined roles, a user-centric approach, agile methodologies, prioritized roadmapping, and the establishment of cross-functional teams.


Enhancing Power Solutions with an Innovative Admin Portal 

In the power solutions industry, efficient administrative tasks are crucial for smooth operations, effective communication, and top-notch customer service. Our client faced challenges with their Admin Portal, which needed to adapt to their diverse products, customer interactions, and dealer relationships. This case study explores the client’s issues and the solutions they used. 

Client Overview:  

The client has been a leader in power solutions. They expanded into various power products and, to manage these offerings, collaborated with Cognine to develop an adaptable Admin Portal.

Customer Needs: 

As the client grew, they required an integrated solution to handle their products, customer interactions, and dealer connections. They aimed for streamlined administrative tasks like user management, feedback handling, contact management, and communication via a Message Centre. They also wanted to improve the Dealer Portal for better role management. 

Solutions Implemented: Working with Cognine, the client developed a comprehensive Admin Portal: 

  • Global Admin Application: A customized hub managed the company’s operations. 
  • Dealer User Management: A module efficiently handled dealer users, from onboarding to access control. 
  • Feedback Management: A system managed customer feedback, aiding product improvement. 
  • Contact Management: Tools for managing customer contacts ensured effective communication. 
  • Message Centre: Integrated messaging facilitated smooth communication with stakeholders. 
  • Role Management: Dealers could control user access for personalized experiences. 
  • Automated Testing: Key features underwent automated testing, enhancing reliability. 

Technology Stack Used: 

Benefits and Outcomes: The collaboration yielded several benefits: 

  • Improved Efficiency: The Admin Portal streamlined administrative tasks, boosting operational efficiency. 
  • Better User Management: The Dealer User Management module enhanced the dealer user experience. 
  • Customer Insights: The Feedback Management module collected customer feedback for ongoing improvement. 
  • Enhanced Communication: The Message Centre fostered better communication with stakeholders. 
  • Personalized Access: Role management empowered dealers to provide tailored user access. 
  • Increased Reliability: Automated testing improved the Admin Portal’s dependability. 

Conclusion: The collaboration led to an innovative Admin Portal that met diverse needs, streamlined operations, and improved engagement. It showcased the client’s commitment to excellence as they continue leading the power solutions industry. 



Change Data Capture (CDC)

Change Data Capture (CDC) with Apache Kafka, Debezium connectors, and the Confluent Schema Registry

Traditionally, migrating data changes between applications in real time or near real time was implemented using APIs developed on the source or target with a push or pull mechanism, incremental data transfers using database logs, batch processes with custom scripts, and so on. These solutions had drawbacks such as:

  • Source and target systems required code changes catering to each specific requirement
  • Near real-time transfers could lead to data loss
  • Performance suffered when the data change frequency and/or volume was high
  • Push or pull mechanisms imposed high-availability requirements
  • Adding multiple target applications needed a longer turnaround time
  • Database-specific real-time migration was confined to vendor-specific implementations
  • Scaling the solution was a time- and cost-intensive operation

Change data capture (CDC) refers to the process of identifying and capturing changes made to data in a database and then delivering those changes in real time to a downstream process or system. Moving data from one application database into another with minimal impact on application performance is the main motivation behind this design pattern. It is a good fit for modern cloud architectures since it is a highly efficient way to move data across a wide area network. And, since it moves data in real time, it also supports real-time analytics and data science.

In most scenarios, CDC is used to capture changes to data and take an action based on that change. The change is usually an insert, update, or delete, and the corresponding action is typically taken in the target system in response to the change made in the source system. Some use cases include:

  • Moving data changes from OLTP to OLAP in real time
  • Consolidating audit logs
  • Tracking data changes of specific objects to be fed into target SQL or NoSQL databases

Overview:

In the following example we will use CDC between source and target PostgreSQL instances, using the Debezium connector on Apache Kafka with a Confluent schema registry to propagate schema changes to the target database. We will use Docker containers to set up the environment.

Now, let us set up the Docker containers to perform CDC operations. In this article we will focus only on insert and update operations.

Docker Containers:

1. On a Windows or Linux machine, install Docker and create a docker-compose.yml file with the following configuration.
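Since the original configuration is not reproduced here, the following is a minimal docker-compose.yml sketch of the kind of setup described (Zookeeper, Kafka, Kafka Connect, the Confluent Schema Registry, and two PostgreSQL instances). Image tags, ports, container names, and passwords are assumptions for illustration; adjust them to your environment.

```yaml
# Minimal sketch; image versions, ports, and credentials are illustrative assumptions.
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:7.3.0
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  schema-registry:
    image: confluentinc/cp-schema-registry:7.3.0
    depends_on: [kafka]
    ports: ["8081:8081"]
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: kafka:9092

  kafka-connect:
    image: confluentinc/cp-kafka-connect:7.3.0
    container_name: kafka-connect
    depends_on: [kafka, schema-registry]
    ports: ["8083:8083"]
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect
      CONNECT_GROUP_ID: cdc-connect
      CONNECT_CONFIG_STORAGE_TOPIC: connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_PLUGIN_PATH: /usr/share/java,/etc/kafka-connect/plugins

  postgres-source:
    image: postgres:14
    container_name: postgres-source
    ports: ["5432:5432"]
    environment:
      POSTGRES_PASSWORD: postgres

  postgres-target:
    image: postgres:14
    container_name: postgres-target
    ports: ["5433:5432"]
    environment:
      POSTGRES_PASSWORD: postgres
```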

2. Navigate to the directory in which the file was created using a command prompt or terminal and run the command below.
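Assuming the file is saved as docker-compose.yml in the current directory:

```sh
# Start all services in the background
docker-compose up -d
```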

Debezium plugin configuration:

1. Once the Docker containers are created, we need to copy the Debezium Kafka Connect JAR files into the plugins folder using the command below.
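For example, assuming the Debezium PostgreSQL connector archive has been downloaded and extracted into a local folder named debezium-connector-postgres, and that the container name and plugin path match the compose sketch above, the copy might look like this:

```sh
# Copy the extracted Debezium PostgreSQL connector into the Connect plugins folder
docker cp ./debezium-connector-postgres kafka-connect:/etc/kafka-connect/plugins/
```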

2. Restart the kafka-connect container after the copy command is executed.
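With the container name assumed above, that restart would be:

```sh
docker restart kafka-connect
```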

Database configuration:

  1. Connect to the postgres-source and postgres-target databases using psql or the pgAdmin tool.
  2. Create a database named testdb on both servers.
  3. Create a sample table in both databases.

Ex: create table test(uuid serial primary key, name text);

  4. In the postgres-source database, execute the command below to change wal_level to logical.

ALTER SYSTEM SET wal_level = 'logical';

  5. Restart the postgres-source docker container using the docker stop and start commands.

Source Connector:

Using any REST client tool, such as Postman, send a POST request to the following endpoint with the body shown below to create the source connector.

Endpoint: http://localhost:8083/connectors
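The request body below is an illustrative sketch of a Debezium PostgreSQL source connector configuration using Avro and the schema registry. The connector name, credentials, and logical server name are assumptions matching the compose sketch above; newer Debezium releases (2.x) use topic.prefix in place of database.server.name.

```json
{
  "name": "test-source-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "postgres-source",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "postgres",
    "database.dbname": "testdb",
    "database.server.name": "source",
    "table.include.list": "public.test",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://schema-registry:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081"
  }
}
```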

In the case of multiple tables, include a comma-separated list of tables in the table.include.list property.

Sink Connector:

Using any REST client tool, such as Postman, send a POST request to the following endpoint with the body shown below to create the sink connector.

Endpoint: http://localhost:8083/connectors
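The body below is a sketch of a JDBC sink connector that writes the captured changes into the target database. It assumes the Confluent JDBC sink connector and the PostgreSQL JDBC driver are available in the Connect plugins, reuses the connector name referenced in the delete example later in the article, and uses Debezium's ExtractNewRecordState transform to unwrap change events; the topic name and credentials follow the source sketch above.

```json
{
  "name": "test-sink-schema-connector",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "source.public.test",
    "connection.url": "jdbc:postgresql://172.18.0.5:5432/testdb",
    "connection.user": "postgres",
    "connection.password": "postgres",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "uuid",
    "auto.create": "false",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "key.converter": "io.confluent.connect.avro.AvroConverter",
    "key.converter.schema.registry.url": "http://schema-registry:8081",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081"
  }
}
```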

Note: 172.18.0.5 is the local IP address of the postgres-target database, which can be obtained using the docker inspect command. Replace this IP address with the one from your target container.

Testing the connectors:

Once the connectors are created, insert or update the records in the table(s) of the source database. Check the records in the target database.
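For example, using the test table created earlier (the values are illustrative):

```sql
-- Make a change in postgres-source; it should appear in postgres-target shortly after.
INSERT INTO test (name) VALUES ('alice');
UPDATE test SET name = 'alice-updated' WHERE uuid = 1;
```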

To debug, check the logs of the kafka-connect container using the docker logs command. Ex: docker logs -f 4a

To delete the created connectors, send a request to the following endpoint with DELETE chosen as the request method.

http://localhost:8083/connectors/test-sink-schema-connector
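For example, with curl (the source connector name matches the sketch used earlier):

```sh
# Remove the sink and source connectors from Kafka Connect
curl -X DELETE http://localhost:8083/connectors/test-sink-schema-connector
curl -X DELETE http://localhost:8083/connectors/test-source-connector
```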


Is real-time data the future?

The transition

In recent years, there has been an explosion of interest in big data and data-intensive computing. Along with this, there has been a corresponding increase in the use of real-time data processing systems. Real-time data processing systems are those that process data as it is generated, rather than waiting for all of the data to be collected before processing it. This article discusses the opportunities and challenges associated with real-time data processing. Before moving to real-time data, let’s look at some facts:

A few fun facts about data

  • 2.5 quintillion bytes of data are created every day, and less than 0.5% of it is used
  • The cloud is no longer able to handle the pressure created by big data and old data storage
  • Data seems to have a shelf life
  • Bad data can cost businesses more than $3.5 trillion per year
  • Structured data helps businesses make better decisions
  • Downloading all the data in existence would take about 181 million years

Now that we have covered these surprising facts, let’s look at the actual trends, case studies, and challenges.

How can organizations choose and adapt to the dynamically changing Data culture?

As per the statistics, in the future more than 50% of data will be collected or created, analyzed, and stored outside the cloud. An organization can always start by analyzing its needs and planning an architecture that generates what it is looking for in the future. Real-time data is being adopted by an array of industries including, but not limited to, banking and finance, retail, and healthcare, and more industries such as advertising and marketing are poised to adopt it this year.

Enterprise data management covers the various activities associated with processing data and checking its quality, accuracy, security, and so on. The data shows that enterprises are self-inhibited by a lack of data availability when it is required, in a form that is easy to access and understand. Not only has this affected their capabilities, it also paralyzes their agility and operational abilities.

Benefits for an organization:

The benefits of real-time data show up quickly. A few of them are listed below.

  • Increased operational efficiency
  • Quicker, automated, intelligent decision-making
  • An enterprise that can project accurate data metrics
  • Help in every aspect of the enterprise, including products, sales, strategy, finance, and more

The future

According to statistics, there has been a sharp decline in consumer spending in the retail market. How is real-time data helping these industries change consumers’ habits and bring them back to their usual shopping patterns? Most retailers are now working on combining real-time data with AI to give real-time information to the consumer and nudge the buyer into purchasing the product sooner.

When you see the message below, what do you think it is?

‘Only 1 left in stock’

It means data and AI are working shoulder to shoulder, which is remarkable. This is the kind of innovation that creates urgency in a consumer’s mind to grab the last item in stock.

Retail is not alone; the healthcare sector is another example. A classic case is healthcare devices, or devices that monitor your health or heart rate. Another massive sector that uses real-time data is finance.

Now, having said all the above, although real-time data is very useful and works like a magic wand, there are certain limitations and challenges when it comes to ‘processing time’.

A Few Real Challenges

Although there are challenges in real-time data projects, there are strategic and effective solutions that can make the entire real-time data processing process smooth. A few challenges and solutions are listed below.

1. Quality

Data quality defines the quality of the output reports, for example in financial projections and business analytics. Not every architecture that is designed and developed can provide the best quality when it comes to real-time data. An organization needs to be extremely careful while collecting, filtering, and strategizing data.

2. Collection disruptions: data formats

When organizations use IoT (Internet of Things) devices with their own data formats, things become very confusing, especially with data coming from different sources in multiple formats. This leads to data disruptions caused by firmware updates or API changes.

A quick solution is to apply batch data processing before the real-time pipelines are created.

3. Bad architecture

The important part is designing the architecture. If the architecture does not give the right results or does not fulfill the requirements of the organization, it is useless, and any business can run into losses when the data is not accurate.

Using a hybrid system that mixes OLTP (online collection and storage of data) and OLAP (online analytical processing) for batch data processing, with carefully designed strategic data pipelines, helps build a good architecture and avoid data loss. So everything links back to architecture.

How can we fix this, or get started with real-time data? You can either hire a team of data scientists to perform these tasks and build an entire department for the change,

Or

Save all the headaches and heartache by booking a consultation with us and plan your journey with a cost-effective data processing model right for you at https://cognine.com/contact-us/

It’s the people behind the technology that matter.


Digital transformation in the utilities sector

While the power and utilities business models have been exempt from massive overhauls for a while, the influence of technological innovation is slowly starting to change this. As a result, digitization’s visionary approach to technology lays the groundwork for new capabilities that are expected to accelerate exponential growth in pressure-pumping market value for decades to come.

What is digital transformation in the power sector?

Rapid transition to clean power is helping businesses reduce their carbon footprints and many are also racing to catch up with the rising sophistication of digital technologies. Companies must ensure that they have the right technology in place to support a digital transformation strategy and leverage it across their entire operations. The power industry is leading the way in this direction, with renewable power at the center of many providers’ strategies. 

This translates into a newer, more diverse set of assets to manage and integrate with the rapidly ageing set of existing assets, worth about $1.2 trillion. In addition to online tools that automate business processes, programmable automation (PA) and cognitive computing are quickly entering the mainstream as firms realise these technologies can help them reduce costs and improve performance. But moving from traditional asset management strategies to comprehensive digital strategies with robust data governance and cybersecurity at their core will mean going beyond software alone:

The need of the moment: companies need more robust analytics capabilities that can support key business processes such as fuel planning, risk management, maintenance scheduling, and asset monitoring, as well as infrastructure planning for new projects.

Digital paradigm shifts in power transformation:

The power and utilities industry today faces a host of challenges that are making the business more difficult to succeed in. On the one hand, high oil prices, a growing population, and widespread use of electricity are making it imperative that businesses adapt to reflect market demands. Meanwhile, industry professionals lack the agility and resilience to effectively deal with these new demands.

At this point in time, what’s more important is how your business will be better able to adapt to changing conditions and capitalise on opportunities as they emerge.

Let us look at some digital paradigm shifts that are game changers in the vertical:

1. Industry 4.0:

Industry 4.0 has revolutionised the way manufacturing operations are run. Industry 4.0 is the transition from a manufacturing operation to an intelligent operation. Data-driven, automated, and ever-improving technology will make factories leaner, more efficient, and profitable.

Today, there is more opportunity than ever before in this industry as digital twins and IoT devices bring us ever closer to fully autonomous systems fuelled by massive amounts of data. This gives more power to utility companies when it comes to gaining a competitive advantage over their peers. However, cyber security becomes even more important through such innovations.

2. Compliance:

The industry is facing an uphill battle in order to meet new emission regulations and sustainability targets. Oil and gas companies are being tasked with implementing smarter, more efficient operations. With the heat on for future-ready leaders, getting creative with available technology solutions will be key in our efforts to meet air quality goals.

3. Changing industry needs:

The shift toward alternative power sources has not only affected the business but has also opened new opportunities for companies in the industry. It has shown the customers what is needed and how their lives will be improved. The shift towards clean power has created new revenue opportunities for utility companies. Customer accessibility and sustainability have driven the development of IoT devices, which allow power companies to connect with customers and influence behavior. In addition, IoT devices are helping reduce the environmental impact that extraction creates.

4. Digital asset management:

Digital innovation is playing a key role in the transformation of asset management in the generation segment. As the industry moves toward digitization, digital asset management can help reduce O&M costs, boost reliability and profitability, and lower greenhouse gas (GHG) emissions.

5. Data management:

We are now entering the era of smart grids. Companies are facing the challenge of collecting, analysing, and acting on all their valuable data. Connected devices help them to collect and analyse data from multiple sources and make it accessible to other departments of an enterprise. Infrastructure set in place that communicates with data centres provides information on electricity consumption; sensors enable real-time monitoring of facilities and assets on the premises; IoT-based solutions offer comprehensive insight into the performance of plants, equipment, and networks as well as usage patterns at every stage of the supply chain.

Wrapping up…

The power industry is open to a great number of digital transformations, the most important aspect of which is that they allow companies to open new horizons of digital opportunity and reach a new level of global economic growth and effective operation.

With the transition to digitization underway, IT architecture is playing an increasingly important role in ensuring that generation and consumption interact smoothly. Not only does the utility sector need to find new ways to manage its business, it also has a duty to inform consumers about new technologies and service offerings so they can make informed choices.


Intelligent Automation is the way of tomorrow. This can be a huge success if done correctly.

The key factors are the right people, processes, and technology, combined with an appropriate strategy. This intelligent application of machines helps organizations work smarter on tasks that humans would otherwise do themselves or hire outside contractors for.

Automation is a great way to save time and resources while improving your business. If you’re looking to automate your business, these 5 steps will help get it done:

1. Start with a business strategy

Automation tools should be your first stop when looking to automate processes. There are many different technologies that might help you achieve business goals, so it’s important not to settle on one strategy without considering the alternatives. They each have their own strengths, weaknesses, benefits, and costs, so take some time here!

IT needs to understand the problems it has before buying tools. The business strategy is key to buying the right tools for your business to grow. Before making any purchases, C-level executives need to understand their company’s goals and how they hope to achieve them using technology; otherwise it might seem like there was no plan at all behind what was purchased!

2. Focus on technology drivers

IT leaders have a major decision to make about automation. The degree of priority you assign will vary depending on its potential for long-term success. Any technology strategy should not just advance the goals of your business but do so in a way that supports leadership’s long-term vision.

3. Determine architecture

The IT leader must first define the environment required to achieve the desired goals within constraints. They should use guideposts or guardrails of technical principles as criteria for success, which might involve many technologies and processes, in order to describe an ideal future-state architecture that relies on a variety of staff from teams throughout your organization.

The architecture should reflect any constraints on specific components, such as whether they must be in the cloud or cannot be. Make sure that the design of your architecture reflects all constraints to avoid future headaches.

4. Mind the gaps; build the roadmaps

The next step is to evaluate what’s already in place and make sure it matches up with the desired automation end state. This includes identifying areas that need to be addressed:

For example, the need for any new technology, processes, or training, or identifying whether existing ones can be used differently.

At this step IT should develop the roadmap, taking into consideration the different factors that affect change, such as the automation strategy.

5. Strategy Implementation

IT should follow a consistent evaluation and selection process in doing so. Suitable automation technologies range from tools that support mature, enterprise-grade scripting all the way up to low-code platforms or even AI-powered robotic process automation. IT should decide what it wants for the company after careful thought about how much work needs to be done now versus in future projects.

While the strategy likely requires changes in processes, team structures, and staffing to implement it successfully, IT needs to work within existing HR frameworks to accomplish these tasks. Additionally, training will be necessary so that employees can easily learn how their new tools operate, with minimal confusion or risk of error when using them at work.

Take Away

Automate your way to a more streamlined and efficient operation. You can start by implementing automation technologies or finding opportunities in the workplace, such as automating certain processes that seem inefficiently executed but would benefit from the advances nonetheless!

Let technology take care of some tasks so that you can focus on more important things. Let us take care of the technology at Cognine.


Intelligent Automation – Future of RPA

Adoption of emerging technologies across industries is rising at breakneck speed. Besides digital transformation, organizations are pushing into digital optimization initiatives like machine learning, AI, and automation to become more competitive, resilient, and efficient.

Robotic Process Automation (RPA) has been one of the most successful and widely adopted automation tools. According to the latest forecast from Gartner, Global RPA software revenue is projected to reach $1.89 billion in 2021, an increase of 19.5% from 2020.

Over the rest of the article, we will focus on how we help enterprises with:

1. Facilitating RPA implementation

Over time, having worked with various clients across industries, we have found that one of the most important imperatives for successful RPA implementations is management buy-in.

Getting started with RPA

In this step, you will lay the foundation needed for successful RPA implementation. At the end of this phase, you will have completed a pilot to show the benefits of implementation.

Opportunity discovery

We work closely with your teams to identify gaps, savings potential, and ROI compared to peers and industry benchmarks. This includes data collation, workshops with your team, and value stream mapping. The collated data is then studied to confirm the opportunities identified.

Platform selection

Once you have identified the opportunities for automation, the next step is to pilot the process. Having worked with leading automation software providers across the ecosystem, we help you identify the right tool for automating the identified opportunities. This includes considerations ranging from no code/low code platforms to cutting edge automation using computer vision, NLP and AI.

POC execution

Build automations and technical flows quickly for the identified automation opportunities. Collect the results, evaluate the feedback, and build scorecards to measure long-term success, with a focus on creating an opportunity pipeline.

Scaling across the enterprise

To scale on the back of successful pilots, organisations need a team responsible for opportunity pipeline creation, automation governance, process assessment, and enterprise-wide support. This team ensures efficient usage of RPA resources, increased integration with and access to new technologies within the enterprise, and an increase in throughput capacity.

We work with you to build teams/capabilities to support continuous improvement, identify cross enterprise opportunities, help you reengineer the processes and track the results after deployment, to ensure you realize the full potential of your automation effort.

2. Challenges and strategic navigation

While RPA adoption has been gaining significant attention in some industries, there have been plenty of failure stories too, with implementations exceeding their planned time, cost, and overall ROI. According to Gartner, “By 2021, 50% of RPA implementations will fail to deliver a sustainable ROI.”

Below are some of the most common challenges you will likely face if you and your company choose to implement RPA.

a. Process Issues

It is recommended that you map your automation journey and identify gaps and savings potential across departments before you set out. While most enterprises successfully implement pilots, they lack a clear opportunity pipeline to scale the effort.

Tasks that are repetitive, rules-based, high volume, and that do not require human judgement are the ideal candidates for automation using RPA. This can include activities such as moving files and folders, copying and pasting data, scraping data from the web, connecting to APIs, and extracting and processing structured and semi-structured content from documents, PDFs, emails, and forms. RPA implementation might be difficult with processes that are non-standardized and require significant human intervention.

Redefining the business process for efficient use of a bot’s time, or modifying the business process itself, might speed up implementation. For example, it might prove more efficient to get all the data first, feed it into the application, and then call the next flow, instead of calling the next flow after every single data entry point.

It is easy to reach automation levels of 70-80% for most applications, but the remaining portion might require a significant investment of time and cost due to its complexity, defeating the whole purpose of automation. Hence it is crucial to make a cutoff between the desirable level of automation and the efficient level.

b. Organizational pitfalls

Beyond getting management buy-in, it is important to rally support from the IT department to successfully execute RPA projects. The IT department plays a crucial role in speeding up RPA implementations with resource allocation, exposing APIs, or even building certain custom scripts over components. Some of the other IT support functions that play a key role include RDP access, network stability, bot run context, and issue resolution time.

c. Technical Issues

It is advisable to choose a low-code/no-code RPA solution over some of the outdated solutions available in the market. It is easier for your internal teams to adopt or transition later, should you work with an outsourced service provider to develop the initial components. It also helps you keep development costs under control.

Some of the other best practices include:
  • Initialising certain applications beforehand
  • Implementing best practices like modularity, reusability, and efficient looping in the code
  • Securing credentials using an orchestrator

d. Post implementation adoption

Scalability, maintenance, and de-commissioning processes are the three most important post-implementation challenges. We covered scalability earlier in the article (an RPA centre of excellence is the most important way to address it). Changes in business processes or applications require the components to be modified. Since most bots are programmed using best practices, it is relatively easy to re-configure and change them as business needs change. As a process evolves over time with changing business needs, we should also be able to analyse when to de-commission it, based on its complexity, the effort it takes to maintain, and the bot’s run time.

3. Future of RPA

The RPA market is expected to grow at double-digit rates through 2024, as per predictions from Gartner. Here are the trends that are expected to shape the RPA market in the short term.

Non-IT Buyers/Low Code and No code Platforms

RPA adoption is over 90% in certain industries, and most of this revenue has been coming from IT buyers. Over time, business buyers are expected to drive revenue growth for RPA players, given the complex business landscape and for the simple reason that there aren’t enough programmers around the world to meet these demands. By 2024, half of new RPA revenue is expected to come from non-IT buyers.

While the existing pure-play RPA leaders like UiPath, Blue Prism, and Automation Anywhere are working on simplifying their platforms, tech giants like SAP, Salesforce, Oracle, ServiceNow, Google, and AWS are focusing on low-code RPA platforms. There have also been innovative start-ups already creating big success stories in the no-code RPA space.

Cognitive Automation

Leveraging NLP, AI, and ML with RPA enables enterprises to expand the scope of processes they can automate. While some tools provide these features by default, others may require custom coding or installing certain plugins from the RPA marketplace.

Process Modelling Automation

One of the top priorities for RPA research is the auto-extraction of process knowledge from logs and videos. Workflow creation and process definition act as a bottleneck in the creation of the opportunity pipeline and are manually intensive. Automating process modelling can speed up RPA implementation and deliver substantial ROI.


Are your Values Aligned?

One of my professors, during my days at Northwestern, asked if my personal values match with my company’s values. The point was that if there is a mismatch in values, maybe it is best to part ways with the company.

I have since started looking at values of different companies and almost all of them seem like good values. Why would a company choose “Bad Customer Service” or “Employees Last” or “Dishonesty to the Core” or “Less Quality” as their value?

Even more, what percentage of employees know their company’s values? Probably a small number (I’d like to know if there’s a precise number from a reliable study). Assuming a vast majority of employees don’t know their company’s values, why even bother defining values? Is setting values even on the minds of start-up founders?

On a more practical note, I think vision, mission, and values should be at the fingertips of the leadership team. Their every action should reflect the company’s values and should encourage all employees to align with them. What if employees don’t align with them? What if employees’ personal values aren’t aligned with your company’s values? How do you identify them? How do you deal with them?


Being Prepared

I went trekking a couple of months ago in the Himalayas, and we all know that the air gets thinner as we gain altitude and our bodies can’t operate at the same efficiency. Apart from lower oxygen levels, trekking includes physical activities that we don’t usually make part of our fitness routines.

I was chatting with our trek leader, as we were walking uphill on a rugged path, about how one could prepare better for these kinds of treks (as a lot of us were struggling with every step we took). He bluntly said, “One can never be prepared for these kinds of activities. You just get here and do it.” I was taken aback by his statement.

We spend a lot of time planning and preparing for a project. Past experience, skills, training, discipline, etc. are what I see as preparing to execute a project well. Does that guarantee success? Maybe that’s what the trek leader meant when he said that one can never be (100%) prepared.

If that’s the case, how much preparation is needed?

Experts suggest a workout program to be “prepared” for the trek. Every project manager brings in a team with the right skills, training, and experience to be “prepared” for the project. Every trek is different. Every project is different. Apart from the evident attributes (for a trek, these would be altitude, number of days, sleeping conditions, etc., and for a project, these would be budget, timelines, scope, etc.), there are several other attributes that may not allow us to be 100% ready.

A sudden snowstorm during a trek can throw all plans out of the window. A team member with key business knowledge being unavailable unexpectedly may put the project in jeopardy. Sudden changes in exchange rates because of political instability can ruin the budget plans on a global project. There could be several reasons why things can go out of track.

In spite of “one can never be 100% prepared”, we still need to prepare with the information we have in hand and anticipate some of the challenges (risk management). 80% preparation will give us the drive and motivation to push it to 100% during execution. A mere 20-30% preparation will not give us a chance to make it over the hill.

With enough training, our bodies adjust very quickly to lower oxygen levels and our lungs are able to work efficiently in thin air as well. Similarly, our teams can adjust quickly and get the project back on track if we plan and prepare well.

A well prepared individual/team can face those risks/challenges and execute the trek/project, in spite of unexpected challenges.


While we may never be 100% prepared, give it your best shot (genuinely) at training, preparing, and executing, and the chances of succeeding will be high.
