
Golang : A Hype or the Future?

Created at Google by Robert Griesemer, Rob Pike and Ken Thompson, Go (often called Golang) was reportedly conceived while its designers were waiting for a large project's code compilation to complete. The three main capabilities they sought were ease of coding, efficient compilation and efficient execution. Bringing all of these together in one language is what makes Go special.

Go is an open-source, procedural, statically typed, compiled, general-purpose programming language. The compiler was originally written in C but is now written in Go itself, which keeps the language self-hosting. Go has seen a lot of success in the last couple of years: a large portion of modern cloud, networking and DevOps software is written in it, e.g. Docker, Kubernetes and Terraform. Many companies also use Go for general-purpose development.

As the demand for performance keeps growing, hardware keeps getting more sophisticated, and manufacturers keep adding cores to keep up. To make use of all these cores, modern systems must maintain database connections across microservices, manage queues and maintain caches. This is why today's hardware needs a programming language with strong support for concurrency, one whose performance can scale as more cores are added over time.

……………………………………………………………………………………………………

Golang versus Other Languages

Languages are compared predominantly on two factors:

  1. Ease of programming
  2. Efficiency

These factors are usually inversely proportional, meaning that any language that has high “ease of programming” usually has low “efficiency” and vice versa. Go holds the sweet spot with great efficiency and adequate “ease of programming”.

Why “Golang” and not “C++”

Go and C++ are both compiled languages with comparable speed (in fact, C++ is generally a bit faster than Go). But the garbage collector in Go is what sets it apart from C and C++.

Any complex program makes use of dynamically allocated memory, and this memory needs to be freed once it is no longer required; otherwise the program would eventually exhaust the available memory and crash. For servers, which are expected to run indefinitely, freeing memory is absolutely essential. In C and C++, the programmer must deallocate every piece of memory they allocate themselves, or pull in a third-party garbage collector. In Go, dynamically allocated memory that is no longer referenced is garbage collected automatically, which makes it much easier to work with.
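
As a minimal sketch (using nothing beyond Go's standard library; the allocation sizes and loop count are arbitrary), the snippet below repeatedly allocates memory and never frees it explicitly; once each slice becomes unreachable, the Go runtime's garbage collector reclaims it:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var stats runtime.MemStats

	for i := 0; i < 5; i++ {
		// Allocate roughly 8 MB that is never freed explicitly.
		data := make([]byte, 8<<20)
		data[0] = 1 // touch it so the allocation is not optimized away

		runtime.ReadMemStats(&stats)
		fmt.Printf("iteration %d: heap in use = %d MB\n", i, stats.HeapInuse>>20)
	}

	// Once the slices become unreachable, the collector reclaims them;
	// there is no free() or delete as in C or C++.
	runtime.GC()
	runtime.ReadMemStats(&stats)
	fmt.Printf("after GC: heap in use = %d MB\n", stats.HeapInuse>>20)
}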

Concurrency and “Golang”

Many programming languages were not designed with concurrency in mind. Lacking native support for concurrent execution, they often slow down programming, compilation and execution. This is where Go stands out as one of the most viable options, supporting both multithreading and concurrency natively.

For example, Python's Global Interpreter Lock allows only one thread to execute Python bytecode at a time, so it essentially never runs two threads in parallel. Running two CPU-heavy tasks in separate Python threads therefore performs about the same as, if not worse than, running them in a single thread.

The Go scheduler, unlike the schedulers in many other languages, manages concurrency entirely by itself. It does not map goroutines to OS-level threads one-to-one; instead, it multiplexes many goroutines onto a small pool of OS threads. This reduces context-switching overhead significantly, as most switches happen at the application level rather than in the kernel.
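
As a small, self-contained sketch of this model (the worker function and the counts are purely illustrative), the snippet below launches several goroutines and lets the Go runtime schedule them over its pool of OS threads:

package main

import (
	"fmt"
	"runtime"
	"sync"
)

// worker simulates a small unit of concurrent work.
func worker(id int, wg *sync.WaitGroup, results chan<- string) {
	defer wg.Done()
	results <- fmt.Sprintf("worker %d ran on one of %d OS threads", id, runtime.GOMAXPROCS(0))
}

func main() {
	const n = 8
	var wg sync.WaitGroup
	results := make(chan string, n) // buffered so workers never block on send

	for i := 1; i <= n; i++ {
		wg.Add(1)
		go worker(i, &wg, results) // scheduled by the Go runtime, not directly by the OS
	}

	wg.Wait()
	close(results)

	for msg := range results {
		fmt.Println(msg)
	}
}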

What is the call stack and why is it important in threads?

In a thread, the call stack stores function return addresses (where to resume after a function call) and local variables. A goroutine has a dynamic, growable call stack that starts at only a few kilobytes and can grow and shrink as needed. OS threads, in contrast, get a fixed stack size, typically around 1 MB, determined by the operating system.

This growable stack is one of the key things that makes goroutines lightweight. Where other languages can afford only a limited number of concurrent threads, a Go program can run far more goroutines because of their small, dynamic stacks.
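
A rough illustration of that claim (the figure of 100,000 goroutines here is just an example, not a benchmark): spawning this many OS threads would normally be prohibitive, while the equivalent goroutines run comfortably:

package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 100_000 // illustrative count, far beyond typical OS thread limits
	var wg sync.WaitGroup
	var mu sync.Mutex
	var counter int64

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++ // a tiny amount of work per goroutine
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Printf("finished %d goroutines\n", counter)
}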

……………………………………………………………………………………………………

Conclusion

Taking into account all of Golang's offerings, we notice that it distinguishes itself considerably in terms of

  •  Faster compilation and execution
  •  Better code readability and documentation
  •  Offering a thoroughly consistent language
  •  Easy versioning
  •  Allowing development with multiple languages
  •  Allowing easier maintenance of dependencies

These features surely make Golang a contender for the next-generation programming language.

……………………………………………………………………………………………………

If you have any queries in this field, talk to Mindfire Solutions. For over 20 years now, we have been the preferred software development partner of more than 1,000 small and medium-sized enterprises across the globe.


Is Swift the Objective Choice now?

'Swift vs Objective-C' is one of the first Google searches every iOS developer makes before beginning their journey into the world of app development. At a broader level, choosing between Objective-C and Swift is also one of the fundamental and crucial decisions every business makes before beginning any iOS app development work.

So, Swift or Objective-C? The answer is not binary. If you have an existing application already written in Objective-C, then you can weigh the benefits of switching over to Swift against sticking with Objective-C. However, if you are planning a new app, then Swift should be your default choice.

Why so? Well, read on to find out…

……………………………………………………………………………………………………

The Story so far

Apple launched a new programming language called Swift at WWDC in June 2014. It came as a surprise to developers because it was intended to replace Objective-C, the main programming language on Apple's platforms, which by all means was stable, proven, had been around for more than two decades and powered millions of apps.

The goal was far-sighted. Swift was designed to be safer, faster and easier to maintain. Though initially built for Apple platforms, it was intended to eventually support all platforms. Before becoming open source, Swift was designed from the ground up by Apple, drawing on decades of Objective-C experience and adding a modern touch derived from the latest programming trends and good practices. It was designed to have all the goodness of a modern-day programming language. Though a descendant of Objective-C, it is fundamentally different in terms of design, syntax, programming style and memory management.

But replacing a decades-old programming language with a new one cannot be an overnight affair. There were thousands of libraries and hundreds of frameworks already written and working with Objective-C, as they were supposed to. Rewriting them in an infant language did not seem logical. Thus, Apple platform frameworks like UIKit, WatchKit and AppKit continue to run on the Objective-C runtime, and Swift has the capability to interface with them seamlessly and work on top of that runtime.

From the very beginning, Swift has been fully compatible with Objective-C, as it should be. Both languages can still coexist on all Apple platforms, and Apple isn't likely to change this in the foreseeable future unless it has a strong reason to.

Support for interactive programming using Playgrounds enables developers to test their ideas live without building and running full applications.

In terms of programming capabilities and flexibility, Swift has a lot to offer. Its functional programming style and strong typing prevent many runtime crashes resulting from out-of-bounds or type-related issues. It has features like closures, tuples, generics, structs and enums that support methods, extensions and protocols, computed properties, and the list just goes on…

Design-wise, factors such as safety, readability, smaller code size, lower error-proneness, efficient and fast iteration over collections, and support for other platforms make Swift fundamentally better than Objective-C.

……………………………………………………………………………………………………

Why Objective-C then?

Despite being so powerful, Swift lacked just one thing, and that is what triggered the Swift vs Objective-C debate: maturity. In the earlier years, deciding between Swift and Objective-C was like choosing between a fledgling with a lot of promise and a veteran with proven credentials.

Those who rushed to build production apps with Swift versions 1 and 2 had to refactor their whole codebase, or simply rewrite it. The language wasn't mature, it was evolving rapidly, and its syntax changed substantially between early iterations. Hence, Swift apps were harder to maintain than Objective-C apps, while Objective-C was mature, trusted and backed by a huge developer base.

However, after Swift 3 the syntax became relatively stable, and the minor refactoring that was still needed was handled by Xcode itself. Swift 4 was more stable yet in terms of design and syntax, but it still lacked ABI stability. Then came Swift 5.

 ……………………………………………………………………………………………………

What makes Swift 5 different?

So far, every version of Swift has been better than the one before. But what makes Swift 5 so special is ABI stability.

Starting with version 4.2, Swift source code from one version has been compatible with the next. However, the application binary, which for the sake of this argument can be thought of as the machine-level code, wasn't compatible with binaries built by a different version of Swift. That is, Swift wasn't ABI stable until version 5 was launched.

With Swift now ABI stable on all Apple platforms (iOS, watchOS, macOS and tvOS), Swift 5 and all future versions of Swift will be compatible with each other at the binary level. Swift will certainly continue to evolve in future releases, but an application written in the current version of Swift will no longer need to be refactored or rewritten to support future OS versions. In fact, libraries written now will seamlessly coexist and communicate at the binary level with code written in future versions of Swift, and vice versa. The immediate benefit to users is a reduction in app size, since the Swift runtime no longer has to be bundled with every app.

……………………………………………………………………………………………………

Conclusion

True, Objective-C is here to stay; there are millions of applications already running on it. But it isn't getting any major updates, and most of the updates it does get are to keep it compatible with Swift. As a language, Swift is far superior. Above all, the number of developers with Objective-C expertise who still practice it will dwindle in the years to come.

……………………………………………………………………………………………………

If you have any queries in this field, talk to Mindfire Solutions. For over 20 years now, we have been the preferred software development partner of more than 1,000 small and medium-sized enterprises across the globe.


The Impact of Augmented Reality on Retail

Augmented Reality (AR) is a technology that allows overlaying digital content like images, videos and 3D objects onto the real world, thereby giving the illusion that it is part of it. One of the most famous examples of AR is Pokémon Go, which overlays a virtual Pokémon (a 3D cartoon character) onto the real world. AR also offers tremendous possibilities outside of the gaming industry, especially in retail.

The adage “Customer Is God” is a golden rule. It isn’t surprising, therefore, that any business that solves its customer’s problems effectively, gets rewarded with the customer’s loyalty, money, and trust. AR is fast becoming an invaluable tool in the hands of Retail businesses that aim to constantly impress their customer base and stay ahead of the competition.

……………………………………………………………………………………………………

Let’s look at some of the issues that concern the customers of this industry.

Customer Problems

With Online Retail:

  • High Time Consumption – Let's say you order clothes from an e-commerce website. Typically it takes a few days for the product to reach you. You then gauge it on all the parameters that matter: size, color, texture, etc. If the product does not meet your expectations, you are likely to exchange it, triggering the cycle to repeat.
  • Return Costs – If the business doesn't bear the shipping cost of returns, the customer has to pay for it.
  • Problems with large items – It requires a very vivid imagination to see how a new couch would look in a room. Will it look good with the rest of the furniture? Will it even fit in the first place?

With Offline Retail:

  • Too much work – It takes a lot of time and energy to go around dozens of stores looking for the right items and then try various permutations and combinations to check whether they look good together.

With both Online and Offline Retail:

  • Un-try-able Products – Some products simply can't be tried on. For example, it's hard to imagine how a particular hair color would look on you, or whether that dragon tattoo would be too much for you to carry.
  • Un-personalized Shopping Experience – Currently, a customer's preferences are unknown to the business. Consequently, the suggestions given to customers are generic and work on a trial-and-error basis.
  • Hygiene Issues – Whether or not you are a germophobe, there is always a risk of contracting a disease if the garment was tried on earlier by an infected person.
  • It ain't fun – Going from store to store, from one website or mobile app to another, and trying on or imagining how every product would look on you is exhausting and not fun for most. And in online retail, even after so much effort, one can never know whether the product will turn out as expected.

……………………………………………………………………………………………………

Let’s look at some of the issues that concern the businesses of this industry.

Business Problems

With Online Retail:

  • Shipment Costs – The trade-off between bearing the shipment costs of product-returns vis-a-vis making the customers pay for it is a tough choice for any business.

With Offline Retail:

  • Compensation for salespeople – Since the whole process is very manual, from the salesperson showing items to the customer to closing the sale, constant involvement is needed.

Problems common to Online and Offline Retail:

  • Conversion Rates – Due to a lack of personalized suggestions and ads, and a tiring shopping experience, conversion rates of businesses are lower than they can be.
  • Brand Awareness – Extensive marketing is needed for businesses to create awareness of their brands, and it is invariably a very expensive matter.
  • Customer Acquisition – Customer acquisition costs eat up a big portion of a business's profits, going mainly into untargeted advertisements with low conversion rates.

……………………………………………………………………………………………………

How AR helps solve these problems

Try And Buy Functionality:  AR can overlay any item onto the real world to make it seem like a part of it. Powerful Machine Learning (ML) algorithms can detect the face and body of a person in an image or even in real time. An application combining AR and ML can let users try on a virtual version of any item they would like to buy, from the comfort of their homes. Another possible feature is the placement of virtual 3D models of furniture inside a user's house. Such features reduce the number of returns a buyer makes, which saves time and cuts return costs.

Saves User's Time and Energy:  Users now have the whole inventory of products available to them and can try anything on with a click, rather than manually trying on every item.

Eliminate Hygiene-related Problems: Since items are tried on virtually, no physical contact with previously tried garments is involved. Trying items this new way is also much more efficient and can be made as aesthetically appealing as needed, making the whole process a joyous experience for the user.

Increase Brand Awareness: Users can take a picture of themselves trying on an item and share it on social media. This leads to free marketing and increased brand awareness.

Automated Processes: For offline retail, the need for a salesperson is heavily reduced. A user enters a fitting room that has a screen instead of a mirror, with a camera attached to it. Users can select their choice of clothing on the screen and instantly try on a virtual version of it; if they like it, they can then ask to try on the real item.

Attracting Customers and Increasing Conversion Rates: A camera-equipped, AR-enabled screen can show how someone standing in front of it would look wearing a certain item. Such a setup outside a retail store can attract flocks of customers who, after seeing themselves try on a virtual item, may want to buy it if it looks good.

……………………………………………………………………………………………………

Things To Know Before Introducing AR Into A Business

Accuracy: An AR experience that isn't accurate will not be useful for customers or the business. For example, a user won't like it if the sunglasses they're trying on sit on their forehead instead of their eyes, or if the virtual couch they are trying to place doesn't rest properly on the ground.

Speed: An AR experience must be fast and lag-free. Long loading time and high latency always drive the user away.

……………………………………………………………………………………………………

If you have any queries in this field, talk to Mindfire Solutions. For over 20 years now, we have been the preferred software development partner of more than 1,000 small and medium-sized enterprises across the globe.


The Impact of DevOps Adoption on Teams

Companies operating in the field of software development have been ushered into an era of stiff challenges and expectations, unprecedented until now. Possessing the qualities of agility, accuracy and speed simultaneously is becoming imperative for survival rather than a means of maintaining a competitive edge. Under the circumstances, a DevOps culture provides a flexible, efficient approach to standing up to these demands. It does so by following a model that delivers results by leveraging the dependencies between the development and operations aspects of software delivery. It balances responsibilities more evenly than a traditional waterfall model, where developers simply turn completed code over to those in charge of operations. DevOps also establishes procedures to ensure that all team members have insight into application performance, which provides benefits such as greater collaboration and engagement between team members.

……………………………………………………………………………………………………

Improved Collaboration

Traditional software development happens in phases. There are teams mapped to each phase and each team is entrusted with the responsibility of playing its part in the successful completion of the phase it is involved in or responsible for.  The result of this approach is that the ownership of a team gets too confined only to the successful execution of the part it deals with. Thus, each team tends to be most concerned with achieving its own objectives instead of meeting the organization’s ultimate business goals. As long as projects get executed successfully, the fissures that exist beneath do not come to the forefront. It is only in the moments of crisis that the lack of synergy becomes apparent and sometimes takes gigantic proportions resulting in the partial or complete derailment of projects.

DevOps neutralizes this possibility completely. The approach requires all team members to be equally dedicated to meeting the broad goals while also focusing on their individual ones. This leads to improved collaboration between people across the development and operations teams and eliminates the possibility of working in silos. Members across teams remain fully committed to the software throughout its development life cycle to ensure that the project's overall goals are met. Accountability for successful delivery lies with everyone, which compels employees to get more involved in working together.

More Engagement

One of the primary goals of DevOps is to shorten the development life cycle while still delivering software that meets business objectives. A shorter development cycle essentially means a higher frequency of code releases, with each release scrutinized for bugs in the code, infrastructure and configuration. The pace at which things get done is brisk; there are no more slack periods where teams wait for their phase to begin. All this brings about a high degree of engagement for every member involved in a project, and it can be intense at times. The results are equally impressive: industry reports have indicated that the failure rate of organizations with a DevOps culture is 60 times lower than that of organizations without one.

Higher Efficiency

DevOps uses a workflow that emphasizes continuous integration (CI) and continuous delivery (CD). The efficiency this infuses results in software being delivered faster and more frequently. Automated testing and integration tools are also key elements of DevOps practices; they make IT staff more efficient by eliminating the need to perform repetitive tasks. Developers no longer need to wait for code integration processes to complete, which can otherwise be quite time-consuming.

Cloud platforms like Microsoft Azure and Amazon Web Services (AWS) offer opportunities for improving efficiency and increasing predictability. They provide scalable infrastructure that reduces testing and deployment times by adding hardware resources when they are needed, and they also offer DevOps as a service, such as Azure DevOps. AWS likewise provides a set of services specifically intended to help organizations implement DevOps practices.

Exposure & Learning

Employees are generally happier and more productive under the DevOps model, largely because it focuses more on performance than anything else. There are fewer administrative obstacles and greater sharing of risk, which allows individuals to blossom. Members in both development and operations teams prefer DevOps because they get exposed to multiple roles, resulting in their getting a better understanding of project execution and the business as a whole. This experience is more rounded, fulfilling and increases job satisfaction considerably.

Better Results

The improved collaboration between teams and the ensuing efficiency have a direct impact on reducing the time needed to build software. Collaboration encourages team members to be proactive in getting their act together. All this eventually reduces the time needed to bring a product to market, a benefit that is particularly important in competitive markets where the ability to deliver software on time directly affects revenue and market share. With the DevOps approach, not only is speed looked after but also the quality of the outcome. Customer satisfaction also increases when customers receive a comprehensive product sooner than expected, with all the promised benefits delivered at the expected quality. Achieving this end goal can be a highly fulfilling experience for everyone involved in giving shape to the software.

……………………………………………………………………………………………………

A DevOps culture improves the collaboration between groups with historically distinct roles, especially people in software development and operations. This practice provides many other benefits that generally result in the faster delivery of software. DevOps practices also improve the engagement of team members by making them responsible for projects throughout their entire life cycle, rather than a specific phase of the project. The increasing availability of tools is making it easier for organizations to implement DevOps practices, allowing team members to automate many of the tasks needed to develop, test and maintain code.

……………………………………………………………………………………………………

If you have any queries in this field, talk to Mindfire Solutions. For over 20 years now, we have been the preferred software development partner of more than 1,000 small and medium-sized enterprises across the globe.


Impact of NLP on Healthcare Industry

Natural language processing (NLP) is a branch of artificial intelligence (AI), alongside machine learning, deep learning, computer vision and image recognition. The goal of NLP software is to build computer systems that accept input in the form of spoken or written language and provide spoken or written output; in other words, to communicate as if the computer system were human.

Thanks to devices and applications like Alexa, Siri, Google Assistant and Cortana, much of the world’s population has at least a passing familiarity with NLP. It is being used today to perform a wide range of tasks across many industries. Until recently though, healthcare organizations have lagged behind others in capturing the benefits NLP delivers. However, it’s beginning to catch up.

Here are several use cases for NLP in healthcare that are already enhancing the field. Each of these will contribute to the larger digital transformation of healthcare as technology continues to advance.

……………………………………………………………………………………………………

Medical Coding and Billing

NLP streamlines the way medical coders extract diagnostic, procedural and other clinical information. Rather than a coder reading documents and converting them to alphanumeric codes, NLP reads them and submits the codes to the coder for verification. This allows the human coder to work on documents that NLP cannot process accurately, and reduces the overall expense of coding medical information. In the end, more accurate and thorough coding results in more accurate and timely billing.

Virtual Nursing Assistants

The rise of virtual nursing assistants capable of communicating with patients using NLP is underway. Regular communication between patients and a nursing bot extends care beyond the walls of the clinic without burdening existing resources. Adherence to the patient's care plan can be monitored, and triggers can notify providers of issues that need human attention. Patients can receive round-the-clock access to support and answers, including help with medication. Researchers in this field estimate that virtual nursing assistants will reduce U.S. healthcare costs by $20 billion by 2026.

Robot-Assisted Surgery

Some surgical robots use AI to apply information obtained from prior surgeries to the current case, leading to progressively better outcomes. Beyond the many well-known advantages robotic surgery delivers, adding an NLP component allows surgeons to query the system and direct its actions verbally.

Reducing “EHR Burnout”

Recent studies have indicated that healthcare providers spend nearly half of each day updating electronic health records (EHRs) and doing other administrative work, which is a matter of concern. It leaves them with very little time for their core functions of examining patients and discussing clinical, diagnostic and treatment information with them face to face.

Entering and managing patient information is a major contributor to physician burnout. More than half of the physicians surveyed in a 2018 Physicians Foundation study reported that entering data into the EHR reduces their efficiency and detracts from their interaction with patients. Systems that use NLP allow physicians to enter notes into the EHR by speaking, which saves time compared with typing. It also allows patients to amend or correct what the doctor is entering into the EHR.

Other Important Use Cases

While improving the clinical value of EHRs and reducing physician burnout are among the most pressing challenges for healthcare organizations, NLP is contributing to the digital transformation of healthcare in several other ways. For example, NLP is helpful in:

  • Comforting patients who become confused and anxious because they do not understand the data being presented to them through a portal website. For instance, NLP can explain the meaning of abbreviations and medical terminology. Rather than leaving the patient to worry or call the physician to explain the report, NLP can educate and possibly also calm the patient.
  • Offering summarized updates of key ideas, concepts, and conclusions contained in large volumes of clinical notes, journal articles and other narrative texts gives practitioners quick access to volumes of information that would otherwise require a lot of time to read through.
  • Easy extraction of data from free-form text and insertion into fixed-field data files, such as the structured fields in an EHR.
  • Handling a physician’s free-form spoken or text query, which is especially useful for queries that require gathering and organizing data from multiple sources.
  • NLP and other AI components can also accelerate the movement away from fee-for-service models and toward value-based healthcare by organizing unstructured health data derived from EHRs and other sources. Much of that “hidden” big data can shed light on health outcomes for entire populations of patients, which has been impractical until recently.

 

……………………………………………………………………………………………………

If you have any queries in this field, talk to Mindfire Solutions. For over 20 years now, we have been the preferred software development partner of more than 1,000 small and medium-sized enterprises across the globe.


Overlooking Web Accessibility

The Internet is an ever-increasing storehouse of knowledge. The web, and the internet as a whole, serves as an important resource in many aspects of our lives: education, employment, recreation, commerce and more. Web accessibility simply means that the web should be accessible to everyone, and that includes people with disabilities too, an aspect generally overlooked in haste.

The concept of web accessibility has been around for about two decades, but it is unfortunate that its true meaning, in its entirety, has been lost on many of us web developers. It's time we built ramps to our sites, which would not only benefit people with disabilities but also enhance the experience of all users.

……………………………………………………………………………………………………


Let's see some examples. Say you are watching a video in a noisy environment and cannot hear the audio clearly. Without the audio, you have to guess what the whole video is about. Frustrating, right?


Let's take another one. Suppose you have broken your arm in an accident and can't use a mouse to browse the web. You either remain cut off from the internet until you recover, manage to access it with difficulty and often pain, or depend on the mercy of people who can spare some time to assist you.

There are people out there who face these challenges every time they attempt to access the web. The true essence of web accessibility lies in addressing such concerns and ensuring that the web is accessible to all, without any discrimination.

The World Wide Web Consortium (W3C) published a set of guidelines, the Web Content Accessibility Guidelines (WCAG 1.0), in 1999 as part of its Web Accessibility Initiative (WAI). The revised version, WCAG 2.0, published in 2008, is more technology-neutral and is therefore widely used by developers to make their sites more accessible.

It may seem like a huge task to accomplish at first, but in reality it takes only small steps to make your website accessible to all. Steps that should be undertaken include:

  • Using alternative text and descriptions for images.
  • Adding subtitles and transcripts for videos.
  • Ensuring that your site is fully and equally accessible by keyboard.
  • Making use of Accessible Rich Internet Applications (ARIA) attributes.
  • Maintaining good color contrast.

There are also a number of tools available that can help you evaluate and improve your website's accessibility.

So, let's look at the bigger picture and start taking the necessary steps towards building a platform that is more accessible and more usable, and fulfill our responsibilities as web developers. It's high time we focused on the people who might be unable to access the internet the way others can. The onus lies with us to take individual responsibility and spread awareness to others. Realizing that mere oversight or negligence on our part can be the source of much trouble for others should guard us against it.

……………………………………………………………………………………………………

The views and opinions expressed in this article are those of the author. To know more about our company, please click on Mindfire Solutions. 


Is Robotic Process Automation changing the Test Automation Game?

RPA has taken the IT world by storm. I won't say that it is the newest thing in business, because it has been around for about 10 years now, but now is the time when it is spreading like wildfire, and more and more companies want to adopt it.

To set the context, let's look at how AI has broken the barriers of our imagination. A few years ago, when we saw robots in movies, we thought they were just a figment of our imagination, or if we did consider that they might become reality someday, we assumed that day to be many years away. What we never imagined was that technology would progress so much so soon. Today, artificial intelligence is making machines more and more human-like. We want machines not only to follow our instructions but to think, and possibly even to exhibit emotions. It is almost as if humans want to assemble humans in labs. No wonder, then, that a robot named Sophia was recently even given citizenship of Saudi Arabia.

Now, going back to Robotic Process Automation. Going just by the words, it seems like robots automating processes. The catch, however, is that when we say robot, we don't mean an actual physical robot, but a virtual one: the automation program.

……………………………………………………………………………………………………

What’s the big deal about RPA in Software Test Automation?

Many could argue that in traditional automation, too, we were creating automation programs. So what is all the hype about?

The key distinctions, in my view, are:

  • It is scriptless: Coming from a manual testing background, it was always a challenge for me to keep learning the latest scripting languages. Now I don't need to learn a programming language to automate a test scenario; I just need to be good with my logic and able to think of out-of-the-box scenarios.
  • The focus is back on product quality: I am not saying that product quality suffered in traditional automation, but in my own experience, when I automated via scripts, most of my time went into writing code. At the end of the day only a few scenarios were automated, and I never had the time to dive into a variety of scenarios. One of two things was always compromised: either the coverage or the deadline. With an RPA tool, this problem has been resolved to a great extent.

……………………………………………………………………………………………………

How things work when Automating Test cases with RPA

There are many RPA tools available in the market, like UiPath, Automation Anywhere, Blue Prism and many more. You can use any of them (FYI: I am not promoting any specific tool).

There are a few basic principles on which all RPA tools are based:

  • Predefined user actions: Most of the user actions one needs while automating a test case, such as clicking a button, hovering the mouse, opening a browser or typing into a text box, are already defined, so the user doesn't have to code them.
  • Built-in decision logic and looping statements: The best part about using an RPA tool for creating scripts is that I didn't have to worry about the syntax of decision logic or loops (do-while, for, etc.). They are built in, and I just needed to use them.
  • Configuration of user events through parameters: Every user action or event can be easily configured using its parameters. Almost every property of a user event is exposed so that it can be customized as much, and as easily, as possible.
  • Easy variable creation: RPA tools make it very easy to create a variable without stressing over syntax.
  • Error handling: If an error occurs while configuring user events or in the overall flow, RPA tools have a very good error-handling mechanism. It helps the user narrow down the source of the error and gives clear, specific error messages, making it convenient to correct.

……………………………………………………………………………………………………

How RPA has changed my Test Automation approach
  • As I said above, RPA has put my focus back on thinking about the logic and covering out-of-the-box test scenarios, rather than spending hours just automating the basic functionality.
  • RPA has improved my test coverage, as I am able to cover a larger number and a wider variety of scenarios.
  • Thanks to RPA, I am able to automate test scenarios faster, as I don't have to write every single line of code on my own.
  • Another advantage of using an RPA tool is that I face fewer errors while automating a test case, since the tool has built-in code for most of the logic and I just have to make sure I use it in the correct flow.
  • With an RPA tool I am able to manage my scripts much more easily, as most of the time it is just a matter of tweaking the properties of the built-in user actions.
  • Scripts created with an RPA tool are also more readable, and I can explain them to another person more easily.
  • Coming from a manual testing background, it was always a big challenge for me to keep learning the latest scripting languages, but now I don't have to invest my time in learning coding languages.
  • I can invest my mind in decision-making tasks rather than in boring, repetitive tasks that machines can do just as well.

Now, when I extract the essence of all these benefits, the bigger picture is that RPA helps increase test automation coverage and reduces the time required, thereby reducing the cost of testing and in turn increasing the company's profit.

……………………………………………………………………………………………………

Having said that, I did face some challenges while automating test scripts with RPA tools.

1) I was not able to automate everything
Having worked with different scripting-based automation tools, I feel there are some scenarios I can't handle using RPA tools; for example, I am not able to automate scenarios that deal with complex database entries, multiple input formats or unstructured input data.

2) It executes at a slower pace
A script created with an RPA tool executes at UI speed, whereas a script written in a scripting language runs much faster. So RPA is comparatively slower than processes automated using traditional automation.

3) Not much help available on the web, so you need to explore on your own
Test automation using RPA is relatively new, so you have to explore many of the built-in user actions and features of the different tools on your own; not much help is available on the web. That was another challenge I faced.

4) It can increase a company’s Test Automation cost
When I started automating with RPA, most of the tools I came across were paid; only one or two had free versions, and those lacked advanced features compared with the open-source scripting tools available in the market. This can prove to be a disadvantage, as it increases the cost of automation for a company.

But I feel the pros outweigh the cons, and the proof is that more and more companies are investing in test automation using RPA. So, in my opinion, if you want to adopt RPA in test automation, the time to act is now.

……………………………………………………………………………………………………

The views and opinions expressed in this article are those of the author. To know more about our company, please click on Mindfire Solutions. 


Getting started with AWS Lambda

AWS Lambda is a key ingredient of Amazon's serverless computing offering. Lambda allows us to run server-side code without thinking about the server: it abstracts away the components (servers, platforms, virtual machines, etc.) needed to run the code, so we can focus on the code itself. That way, the time to production or deployment becomes very short; we can write a Lambda function, configure it and run it in minutes.

Another great benefit of Lambda is that we pay only for the compute time we consume, meaning we are charged only for the time our code actually executes. Also, the first one million requests per month are free; we pay for requests thereafter. This is a very cost-effective way to run server-side code. To get started, we first need an AWS account. After creating the account, we go to the AWS Management Console.

……………………………………………………………………………………………………

Create a Lambda function with Node.js

Let's create a Lambda function that picks a random number between two given numbers. First, log in to the AWS console and click "Lambda" under the Compute section. You will come to the "Select blueprint" section. Under the runtime combo box, select the latest Node.js version. Amazon provides some basic blueprints there; we will select the simple hello-world function to start with.


We will skip to the "Configure function" section to create a new function. We will name it random-number-generator, specify a description and choose the runtime (Node.js 4.3 at the time of writing). Our function is small, so we will choose to edit the code inline. Amazon's blueprint provides a very basic function.

We will change this default code to generate our random number between two given numbers.

At the top, just add console.log('Loading function'); this will help in debugging the code. In the default Amazon function, some event values are logged and, at the end, the first value is returned via the callback. We then attach a handler function to the exports variable. This function receives three parameters: event, context, and callback.

exports.handler = (event, context, callback) => {
    console.log('value 1 =', event.key1);
    console.log('value 2 =', event.key2);
    console.log('value 3 =', event.key3);
    callback(null, event.key1);
};

The callback is what we call when our result is ready and we want to send it back to the caller. It takes two parameters: the first is an error and the second is the success result. Either can be a string or a JSON object.

We will delete this default code and write our own. First, we define and set the minimum and maximum numbers.

 exports.handler = (event, context, callback) => {
           let min = 0;
           let max = 10;
}

Now we will define another variable for the random number.

exports.handler = (event, context, callback) => {
    let min = 0;
    let max = 10;
    let generatedNumber = Math.floor(Math.random() * (max - min + 1)) + min;
};

Math.random() generates a floating-point number between 0 and 1, so we multiply it by (max - min + 1), round it down with Math.floor(), and add the minimum. That gives us a random integer between the minimum and maximum numbers, inclusive.

Now we are done and want to return the random number. So we will call the callback function.

callback(null, generatedNumber);

Since no error handling is implemented here, we pass null as the error parameter and generatedNumber as the result.

That’s it, the code part is done.

Now scroll down and let's define our handler. The default is index.handler: index refers to the file name (index.js) and handler is the name of the function attached to exports. We will leave this at its default.

Now we will create a new role and name it 'basic-lambda-execute-role'. Under the policy template, we will select 'Simple Microservice permissions'.

Next is the advanced settings.

Each Lambda function runs in a container, and that container has some memory allocated to it. Here we can pick how much memory should be allocated to our function. Ours is a basic function, so we will select 128 MB, which is more than enough.

This setting defines not only the memory allocated to the function but also the amount of processing power Amazon uses to execute it. For a more resource-intensive function, we can increase the memory and get faster execution. For the timeout, we will leave it at 3 seconds, which is enough; if the function does not finish within this timeout, Amazon returns an error. We will leave the VPC set to 'No VPC' and move on. On the next page, Amazon lets us review the configuration and then click 'Create function'. We will get a message that our function has been created, and we can see its dashboard.

On the dashboard we can see our code, configuration and triggers, and we can also monitor our function.

Let's test it by clicking the Test button. If we scroll down, we can see that the function executed successfully, along with the resulting random number.

So that's it; our random-number-generator Lambda function is up and running.

……………………………………………………………………………………………………

The views and opinions expressed in this article are those of the author. To know more about our company, please click on Mindfire Solutions. 


What is a HABIT?

The easy definition would be something that you do daily without being forced or pushed, for example something as simple as brushing your teeth. But do you remember how tough it was when you were a small kid learning to develop this habit? I am sure it is not even something you think about today.

Another good and simple example of a HABIT is cycling. Remember your first day of cycling: that feeling of imbalance, falling off or crashing with no hand-eye coordination, and then slowly and steadily you became the cyclist in your neighborhood doing all kinds of stunts.

There are many such examples in our daily lives where the beginning looked as difficult as climbing Everest, but as you start taking those steps forward, it becomes simpler and later maybe even a cakewalk.

……………………………………………………………………………………………………

Why am I talking about habit? Because as we grow older and get caught up in our daily chores, we stop adopting new habits, making changes or even attempting something new, simply because we feel we do not have the time; there is always too much work on the plate, professional or personal. Whenever we think of doing something new, we push it to a later date, convincing ourselves that we will do it when we have the time.

My friend, where is that time? The fact of the matter is that NOW is the time.

Remember, everyone has 24 hours in a day. Using that same time, some became the Tendulkars, Steve Jobs and Bill Gates of the world, to name a few, while many are still searching for the time to begin.

Here I present a new definition of H.A.B.I.T: "[H]aving [A]bility [B]uild [I]ntense [T]ricks". Obviously this is not mine; it is taken from the internet, but it fits our bill here very well.

What does it take to build a habit? The answer is a "decision", followed by "action" in the form of small steps at the same time every day for the next 21 days (an idea introduced by Dr. Maxwell Maltz). But I suggest you do it for one day and then repeat it for the next 30 days; trust me, you will be rolling. The trick is that it has to be continuous: if you break for one day, the cycle has to begin again from day 1 🙂 that is why it's Intense Tricks ;).

So go pick up that guitar hanging in your bedroom and staring at you, or start reading about that new tech area or buzzword you always wanted to get your hands on, and do it for one day, then repeat the cycle for the next 30 days.

The caveat is that there are still no guarantees of success. It varies from person to person and depends on one's burning desire to make something work. But it is much better than not having tried at all, isn't it? Roger Bannister was the first man to run a mile in under 4 minutes; it was his persistence and practice that enabled him to cross what had until then seemed like a barrier meant to stay forever.

Do leave a comment if you really got into a habit 🙂

……………………………………………………………………………………………………

The views and opinions expressed in this article are those of the author. To know more about our company, please click on Mindfire Solutions. 


How Does Bitcoin Solve the Double-Spending Problem?

Many of us have probably already heard of Bitcoin, and we know the innovation it has brought into this world: blockchain technology. As of this writing, it has been almost a decade since its inception, and it has long thrived without any central control over the network.

Bitcoin, a peer-to-peer electronic cash system, has inspired many other projects and can be seen as a pioneer of the underpinning blockchain technology. That said, it is worth exploring how Bitcoin solves the double-spending problem. Instead of delving into theoretical exposition, we will walk through an actual transaction on a real network and analyze what a Bitcoin transaction looks like. A transaction in the Bitcoin network is a bit more complex than a conventional digital transaction.

……………………………………………………………………………………………………

Comparison with Fiat Currency Transaction

In a transaction that involves currency notes, we can easily envisage two parties exchanging goods or services for money. One party receives the goods or services and the other pays in currency notes. Let us say the transaction costs $50 and the payer has a $100 note. The payer would pay $100 and receive $50 in change (as shown below). Both currency notes are legal tender supplied by a central bank. A Bitcoin transaction also involves paying, and may likewise involve receiving change back, and in this respect it is quite similar to our day-to-day transactions.


Comparison with Conventional Digital Transaction

A conventional digital transaction, say an online transfer of money, involves two parties and a mediator (the bank). So there is a "From address" (account number), a "To address" (account number) and the amount (the value transferred). There is no concept of change in a conventional digital transaction: if you want to transfer $1050, you transfer the exact amount, and it is merely a matter of debiting the sender's account and crediting the receiver's account, with the mediator validating the transaction. However, a Bitcoin transaction may involve multiple From addresses and multiple To addresses without any mediator. We will explore how this is possible.

As it is with any traditional transaction, ours will have the following attributes: a “From entity”, a “To entity” and the value to transfer. Let us send an amount of 0.1 BTC to a Bitcoin user as follows:

From: n2FSwa6DsMsbJgNknB64ThR3pHPUQ79bxL
To: msqdPeF7KeEqcWUNAFMm8JQijVB3cnLi4N

Amount: 0.1 BTC

The transaction has been made, and its details can be seen below.

Bitcoin Transaction

Now, what looks legitimate is the From address (left) and one of the To addresses, with 0.1 BTC sent. However, two things look contradictory. Firstly, we sent 0.1 BTC but it says 1.0 BTC was transferred. Secondly, there is one more To address to which some amount has been sent.

Is something wrong with this transaction? Not really! You can check the above transaction in block explorer and verify yourself that it is indeed the same transaction. But this is the way Bitcoin works. Let us explore.

What is a Bitcoin Transaction made up of?

A transaction in Bitcoin consists of inputs and outputs. An input is like a "From address", which in Bitcoin terms is an unspent transaction output. When you make a transaction, you always spend an unspent transaction output as a whole; that is, you end up paying its entire amount. However, you receive the remaining amount at a different address, called a change address. This change address is your own address where you collect the change, which in turn becomes a new unspent transaction output. This is quite common in Bitcoin transactions. For instance, suppose someone sent you 1.0 BTC and you now want to send 0.5 BTC to a friend. You cannot break the 1.0 BTC; you spend the entire 1.0 BTC in a transaction and collect the change at your change address.

Transaction Fee

Back to our transaction. Let us verify that the amounts in the input and the outputs are balanced. So, 1 BTC - (0.1 BTC + 0.89432145 BTC) should be 0, but it turns out to be 0.00567855. So, where did this amount go?

Well, this is the transaction fee that is paid to the miner who helped you in validating the transaction, adding it into a block, mining the block, and broadcasting to the network. The miner is given this amount as a mining fee for the work he has done.

The mining fee is charged in satoshis per byte. Our transaction has a size of 225 bytes and we were charged 2,523.8 satoshis per byte. So, 2,523.8 x 225 = 567,855 satoshis, which is 0.00567855 BTC (1 BTC = 100,000,000 satoshis).

The Concept of UTXO

Note that the two outputs here are marked as "unspent". This is how Bitcoin keeps track of balances: the sum of all unspent transaction outputs addressed to you constitutes your balance. The Bitcoin network does not maintain a database or global state of account balances; rather, it uses the concept of the UTXO (unspent transaction output).

So, how is a UTXO represented in the Bitcoin protocol? Interestingly, in the Bitcoin protocol there is no concept of a "From address". Yes, there is no From address in Bitcoin; Bitcoin addresses are used to receive payments. A transaction never encodes a From address but only references a previous unspent transaction output. That is, the input of a Bitcoin transaction is actually a previous unspent output, referred to by a combination of the transaction ID (or transaction hash) and an output index. Once an unspent output is spent, it cannot be spent again, and this is what prevents double-spending.
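
As a simplified, illustrative sketch (the Go types, field names and placeholder values below are my own modeling, not the actual Bitcoin data structures), a transaction that references previous outputs by (transaction ID, index) and a balance computed from the remaining UTXOs might look like this:

package main

import "fmt"

// Outpoint identifies a previous transaction output by its transaction ID and index.
// These types are illustrative only, not the Bitcoin wire format.
type Outpoint struct {
	TxID  string
	Index int
}

// TxOutput pays Value (in satoshis) to Address.
type TxOutput struct {
	Address string
	Value   int64
}

// Transaction has no "From address": its inputs only reference earlier outputs.
type Transaction struct {
	Inputs  []Outpoint
	Outputs []TxOutput
}

func main() {
	prev := Outpoint{TxID: "previous-txid-placeholder", Index: 0}

	// The wallet's view: the set of still-unspent outputs it is able to spend.
	utxos := map[Outpoint]TxOutput{
		prev: {Address: "myAddress", Value: 100_000_000}, // 1.0 BTC
	}

	// Spend the whole 1.0 BTC output: 0.1 BTC to the payee, most of the rest
	// back to a change address; the unaccounted remainder is the miner's fee.
	tx := Transaction{
		Inputs: []Outpoint{prev},
		Outputs: []TxOutput{
			{Address: "payeeAddress", Value: 10_000_000},  // 0.1 BTC
			{Address: "changeAddress", Value: 89_432_145}, // 0.89432145 BTC
		},
	}

	// The referenced output is now spent and can never be referenced again,
	// which is what rules out double-spending.
	for _, in := range tx.Inputs {
		delete(utxos, in)
	}

	var balance int64
	for _, out := range utxos {
		balance += out.Value
	}
	fmt.Printf("remaining balance: %d satoshis\n", balance)
}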

A Transaction with Multiple Inputs

So, how does a Bitcoin user transact an amount for which he has no single unspent transaction output of equal or greater value? Say a user wants to transfer 5 bitcoins but none of his unspent transaction outputs holds that much, although together they add up to more than 5. Bitcoin allows you to combine unspent transaction outputs. A transaction with multiple inputs may sound strange to someone used to conventional digital transactions, because a conventional digital transaction always has only one sender (or From address). Let us analyze a Bitcoin transaction with multiple inputs. In this case, the user wants to send 1.02 bitcoins but has no unspent transaction output of that value, so he combines two inputs and then transacts (see below).

Bitcoin Transaction

The above transaction (ac194c19201a20cdd26bbb8d696588370c06261148fd20a96b3330b0bcb03207) has two inputs and two outputs, and it is a perfectly valid Bitcoin transaction. The total value of the two inputs, 1.04997424 BTC, is sufficient to send 1.02 BTC, and the remaining value, minus the fee, has been collected at a change address as 0.02997013 BTC.

……………………………………………………………………………………………………

How are transactions validated in Bitcoin?

Let us take an example of a transaction that involves one input and one output (as below). Here, the input is a reference to output index 0 of a previous transaction, identified by its transaction hash: f5d8ee39a430901c91a5917b9f2dc19d6d1a0e9cea205b009ca73dd04470b9a6.
The output sends 50 bitcoins (5,000,000,000 satoshis) to a Bitcoin address. When the recipient wants to spend these 50 bitcoins, he will reference output 0 of this transaction as an input of his own transaction.

Input:

Previous tx: f5d8ee39a430901c91a5917b9f2dc19d6d1a0e9cea205b009ca73dd04470b9a6
Index: 0
scriptSig: 304502206e21798a42fae0e854281abd38bacd1aeed3ee3738d9e1446618c4571d1090db022100e2ac980643b0b82c0e88ffdfec6b64e3e6ba35e7ba5fdd7d5d6cc8d25c6b241501

Output:

Value: 5000000000

scriptPubKey: OP_DUP OP_HASH160 404371705fa9bd789a2fcd52d2c580b65d35549d OP_EQUALVERIFY OP_CHECKSIG

Bitcoin uses a scripting system to verify transactions. There are two script components in the above transaction: scriptPubKey and scriptSig. The scriptSig contains the sender's signature and public key. The scriptPubKey is the script that is evaluated by the Bitcoin protocol; if the execution of the combined script returns true, the transaction is valid.

scriptSig: <sig> <pubKey>
scriptPubKey: OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG

Let us see how this script is executed on the stack (a simplified code sketch follows the steps):

Step 1: Combine scriptSig and scriptPubKey, in that order.
Step 2: Push <sig> and <pubKey> onto the stack.
Step 3: Execute OP_DUP, which duplicates the top item, <pubKey>.
Step 4: Execute OP_HASH160, which creates a hash of the <pubKey> and pushes it onto the stack.
Step 5: Execute OP_EQUALVERIFY to ensure the generated hash matches <pubKeyHash>.
Step 6: Execute OP_CHECKSIG, which verifies that <sig> is a valid signature over the transaction for <pubKey>.
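
Below is a deliberately simplified sketch of that stack execution in Go. Note the assumptions: hash160 here is a stand-in (real Bitcoin uses RIPEMD-160 over SHA-256), and checkSig is only a placeholder for full ECDSA signature verification over the transaction:

package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// hash160 stands in for Bitcoin's HASH160; here we truncate a double SHA-256
// to 20 bytes just to keep the sketch self-contained.
func hash160(data []byte) []byte {
	h := sha256.Sum256(data)
	h = sha256.Sum256(h[:])
	return h[:20]
}

// checkSig stands in for full ECDSA verification of <sig> against <pubKey>.
func checkSig(sig, pubKey []byte) bool {
	return len(sig) > 0 && len(pubKey) > 0 // placeholder only
}

// verifyP2PKH mimics scriptSig + scriptPubKey execution for a pay-to-pubkey-hash output.
func verifyP2PKH(sig, pubKey, pubKeyHash []byte) bool {
	var stack [][]byte
	push := func(b []byte) { stack = append(stack, b) }
	pop := func() []byte { b := stack[len(stack)-1]; stack = stack[:len(stack)-1]; return b }

	// scriptSig: push <sig> and <pubKey>
	push(sig)
	push(pubKey)

	// OP_DUP: duplicate the top item (<pubKey>)
	push(stack[len(stack)-1])

	// OP_HASH160: hash the duplicated <pubKey> and push the result
	push(hash160(pop()))

	// <pubKeyHash> from scriptPubKey, then OP_EQUALVERIFY
	if !bytes.Equal(pop(), pubKeyHash) {
		return false
	}

	// OP_CHECKSIG on the remaining <pubKey> and <sig>
	pk, s := pop(), pop()
	return checkSig(s, pk)
}

func main() {
	pubKey := []byte("dummy-public-key")
	sig := []byte("dummy-signature")
	fmt.Println("valid:", verifyP2PKH(sig, pubKey, hash160(pubKey)))
}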

In summary, a Bitcoin transaction involves one or more inputs and one or more outputs, has no concept of From addresses in its protocol, uses a concept of unspent transaction output, and verifies the transaction using a scripting architecture.

……………………………………………………………………………………………………

The views and opinions expressed in this article are those of the author. To know more about our company, please click on Mindfire Solutions. 
