The financial industry is going through a huge change in 2020, and financial services firms are in a unique position to modernize and improve business operations. How do you pinpoint the best and safest roads to prosperity, business continuity, and agility? It may seem like a tough nut to crack, and that is where public cloud adoption enters center stage. Making the switch to new and unfamiliar tech is an understandable concern, given critical factors such as rising cost pressures, complex business data and workloads, and the need to train existing IT staff. These concerns can be mitigated by turning to expert IT services built on public cloud power.

There is a seismic shift working in favor of public cloud, and market statistics show a growing tendency toward adoption. According to a recent Gartner forecast, the cloud shift across key enterprise IT markets will surge to 28% by 2022 (up from 18% in 2018).

Bigger financial market players are already making the leap. Business Insider reports that PayPal is looking to handle the bulk of its transactions via Google's public cloud. Goldman Sachs revealed its intention to migrate its Marquee app to Amazon's cloud in an effort to attract fintech developers. Meanwhile, JPMorgan is creating a cloud engineering hub in Seattle, minutes away from Amazon and Microsoft. As you can see, the big players are hopping on the bandwagon.

Let us take a quick look at some of the key trends in financial services:
  1. Embracing Public Cloud for Business Operations
  2. Security and Compliance
  3. Hybrid Cloud
  4. Application Hosting
  5. AI and Machine Learning in Public Cloud
  6. Cloud Solutions Using OPEX Model

Embracing Public Cloud for Business Operations

Banks, hedge funds, and other financial services firms are still having trouble planning the road ahead. Banks, for instance, see the cloud as the perfect launching point for digital transformation and business resilience. Many firms initially have concerns about security, but they soon recognize a well-known truth: public cloud providers have tremendous spending power to maintain and secure their cloud environments. The public cloud environment and infrastructure offer numerous advantages, which is why adoption is one of the hottest trends in financial services. Feel free to check out the 11 biggest benefits of public cloud; knowing these advantages can help you crystallize exactly what is best for improving and modernizing business operations.

Security and Compliance

Many companies have learned valuable lessons and are taking extra caution every step of the way. The priorities are to improve data management and to increase computing power and storage capacity, a dynamic and ever-changing challenge, and the need for safety and security has risen dramatically. Adopting public cloud advisory services and public cloud provider tech helps mitigate potential soft spots in security, minimizing the chances of a cyberattack. For example, every deployed application is continuously monitored and regularly checked by expert in-house IT teams or your trusted outsourced IT partner. As a result, data stays controlled within the safety of the public cloud, and such a controlled environment allows for the customization needed to ensure smooth cloud deployments. Given the variety of compliance and regulatory standards within the financial services industry, companies are also scrutinizing Service Level Agreements (SLAs), reviewing whether the service provider can align them properly with business needs. Encrypting data is equally important: encrypting sensitive data is crucial for meeting compliance standards.
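
As a rough illustration of what encryption at rest can look like in practice, here is a minimal sketch using Python's cryptography library. The field name is invented, and real deployments would pull the key from a managed key store rather than generating it inline:

```python
# A minimal sketch of encrypting a sensitive field before it is written
# to cloud storage. Assumes the `cryptography` package is installed;
# key management (e.g. a cloud key vault) is out of scope here.
from cryptography.fernet import Fernet

# In practice the key would come from a managed key store,
# not be generated inline like this.
key = Fernet.generate_key()
cipher = Fernet(key)

account_number = b"GB29NWBK60161331926819"  # illustrative sensitive field
token = cipher.encrypt(account_number)      # ciphertext safe to store

assert cipher.decrypt(token) == account_number
```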

Hybrid Cloud

In addition, numerous companies are exploring other options to increase flexibility. They are looking to adapt their IT infrastructures to combine cloud services using hybrid cloud: a mix of the benefits of public cloud with the private cloud (or on-prem) services that might suit certain businesses. Because of this, Microsoft and Amazon are now hitting the market with their own hybrid solutions. These may solve the problem for companies that want to reap the massive benefits of public cloud but still need to keep certain workloads on-premises.

Application Hosting

In the financial world, timing is everything. Financial firms need to know that their applications run smoothly and effectively to meet everyday goals. However, moving legacy applications to the public cloud can be quite a challenge. Public cloud environments are often complex, so it is a good move to turn to a public cloud MSP with deep experience in the financial industry.

AI and Machine Learning in Public Cloud

Machine Learning (ML) algorithms are widely used in the tech industry and are increasingly valuable for large-scale processes. Machine learning is a complex process, but it has a very simple and practical purpose: to learn, adapt, and get better. It is software that constantly improves itself; in a nutshell, ML automates intelligent decision making. Machine learning can be used for factor selection in quantitative finance. It also has other practical uses, such as tracking transactions and zeroing in on suspicious accounts and activities within the cloud, thereby improving security. Modern public cloud providers offer an impressive variety of AI-based tooling to enhance security and improve trading strategies. Banks, for example, are progressively using AI and ML to automate processes such as trade finance, smart contracts, and foreign payments.
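
To make the transaction-monitoring use case concrete, here is a hedged sketch that flags outlying transactions with scikit-learn's IsolationForest. The features, data, and contamination rate are illustrative assumptions, not a production fraud model:

```python
# A simplified sketch of anomaly detection over transactions using
# scikit-learn's IsolationForest. Features and contamination rate are
# illustrative assumptions, not a production fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Each row: [amount, hour_of_day]; mostly routine, plus a few odd ones.
normal = rng.normal(loc=[50.0, 14.0], scale=[20.0, 3.0], size=(1000, 2))
odd = np.array([[9500.0, 3.0], [7200.0, 4.0]])  # large, late-night
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(transactions)  # -1 marks suspected outliers

suspicious = transactions[labels == -1]
print(f"{len(suspicious)} transactions flagged for review")
```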

Cloud Solutions Using OPEX Model

In the business world, OPEX is an abbreviation for "operating costs" or "operational expenditure": the expenses that come with running an everyday business. This can include anything from services to customer/client care, or any consumable resources paid for regularly; in short, it is the pay-as-you-go model. CAPEX, on the other hand, denotes long-standing investments and long-term commitments to equipment, resources, capacity, and so on. Financial services firms are increasingly using managed public cloud services, and thereby switching to OPEX, driving innovation and delivering a better customer experience. Why? The simple answer is that OPEX is a more cost-effective and more flexible operating model. Running and maintaining the cloud is left to cloud experts, while the firm's existing staff focuses on the regular duties necessary to run the business smoothly. Financial forecasts stay stable and predictable, and hiring new staff or additional training of regular staff is no longer necessary.
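
To make the cost comparison tangible, here is a toy back-of-the-envelope calculation; every figure is invented purely for illustration:

```python
# Toy CAPEX vs OPEX comparison; all figures are invented for illustration.
capex_servers = 120_000            # upfront hardware purchase
capex_yearly_maintenance = 15_000  # support, power, datacenter space
years = 3

opex_monthly_cloud = 3_500         # pay-as-you-go managed cloud spend

capex_total = capex_servers + capex_yearly_maintenance * years
opex_total = opex_monthly_cloud * 12 * years

print(f"CAPEX over {years} years: ${capex_total:,}")
print(f"OPEX  over {years} years: ${opex_total:,}")
# OPEX also scales down in quiet periods, which purchased hardware cannot.
```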


The Challenge


Hentsū has taken the time to carefully construct this Azure Data Factory case study to highlight the benefits of both the cloud and Microsoft ADF. A client recently approached us with a data science challenge regarding one of their data sets. The data was provided to the client in an AWS environment, in a Redshift data warehouse. While this was fast, they found it very expensive: in this setup, data and compute costs are coupled together, so a large data set forces a high spend on compute even when that level of speed is unnecessary for the analysts.

However, the data was also available in CSV format in an S3 storage bucket, which could be the starting point of a new approach. The client already had all their infrastructure deployed and managed by Hentsū in Azure, so they wanted to consolidate into the existing infrastructure.

After reviewing the challenges, we were able to create an elegant solution that leverages the power and scale of the cloud in a way that is simply not possible with traditional infrastructure.


REQUIREMENTS

  • Process 11,000 files with a total compressed size of ~2 TB
  • Ingest the data into a database
  • Keep the raw files
  • Run parallel, rate-controlled ingestion
  • Account for every file
  • Keep ongoing maintenance low-effort, cost-efficient, and automated

Key Considerations

  • The solution had to process a large data set of over 11,000 files with a total compressed size of ~2 TB, with additional files arriving every day.
  • Raw files had to be stored for any future needs, whilst also being ingested into a database.
  • The ingestion had to be both parallelisable and rate controlled, to manage the number of database connections and keep ingestion orderly.
  • This was not only a one-time load of historical data: new files needed to be downloaded and ingested automatically as they were created.
  • Every file had to be accounted for to ensure all the data moved correctly, so keeping track of each file's status was important. Things happen; connections break and processes stop working, so a system had to be in place for when they do.
  • Ongoing maintenance had to be low-effort, cost-efficient, and automated, keeping as much of it as possible away from end users.

The Solution


Hentsū recommended a solution built on Azure Data Factory (ADF), Microsoft's Extract-Transform-Load (ETL) solution for Azure. While there are many ETL solutions that can run on any infrastructure, this is very much a native Azure service, and it easily ties into the other services Microsoft offers.

The key functionality is the ability to define the data-movement pipelines in a web user interface and to set schedules that are either event-based (such as the creation of a new file) or time-based. Azure then handles the execution of the pipelines to process the data. Pipeline creation requires relatively little coding experience, which makes it easy to delegate to staff with a less technical background.
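
Pipelines can also be defined in code. As a hedged sketch of what that can look like, the snippet below follows the pattern of the azure-mgmt-datafactory Python SDK; all names and resource identifiers are placeholders rather than the client's actual pipeline, and the referenced datasets are assumed to exist already:

```python
# A minimal sketch of defining an ADF copy pipeline in code, following
# the azure-mgmt-datafactory SDK pattern. All names are placeholders;
# the referenced datasets are assumed to exist already.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    BlobSink, BlobSource, CopyActivity, DatasetReference, PipelineResource,
)

adf_client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
)

# One copy activity: read from a source dataset, write to a sink dataset.
copy_files = CopyActivity(
    name="CopyRawFiles",
    inputs=[DatasetReference(reference_name="SourceCsvDataset")],
    outputs=[DatasetReference(reference_name="DataLakeDataset")],
    source=BlobSource(),
    sink=BlobSink(),
)

adf_client.pipelines.create_or_update(
    "<resource-group>", "<factory-name>", "IngestPipeline",
    PipelineResource(activities=[copy_files]),
)
```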

 


TECHNOLOGIES USED

  • Azure Data Factory (ADF) to provide the ETL logic and processing of files
  • Azure SQL Data Warehouse for the end storage of data to be consumed by analysts
  • Azure Data Lake for the long term storage of raw files

Technical Details

In this particular Azure Data Factory case study, Hentsū built out the data pipelines to move the data from AWS into Azure. The initial load was triggered manually; the update schedules were then set to check for new files at regular intervals.

Hentsū created status tables to keep track of each file. These record the state of the data as it passes through the pipelines and support a decoupled structure, so any troubleshooting or manual intervention can happen at any stage of the process without creating dependencies. Individual files and steps can be fixed in isolation while the rest of the pipelines and steps continue uninterrupted, and any errors on a particular step are easily identified and flagged to users for investigation.
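
A minimal illustration of the status-table idea, using SQLite for brevity (the real solution used Azure SQL; the table, column, and state names here are invented for the example):

```python
# A minimal illustration of per-file status tracking, using SQLite for
# brevity. States and column names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE file_status (
        file_name  TEXT PRIMARY KEY,
        state      TEXT NOT NULL,        -- discovered/copied/ingested/failed
        updated_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def set_state(file_name: str, state: str) -> None:
    """Record the latest pipeline state for a file (upsert)."""
    conn.execute(
        "INSERT INTO file_status (file_name, state) VALUES (?, ?) "
        "ON CONFLICT(file_name) DO UPDATE SET state = excluded.state, "
        "updated_at = CURRENT_TIMESTAMP",
        (file_name, state),
    )

set_state("trades_2020_01.csv.gz", "discovered")
set_state("trades_2020_01.csv.gz", "copied")

# Anything not yet ingested is easy to find and retry in isolation.
stuck = conn.execute(
    "SELECT file_name, state FROM file_status WHERE state != 'ingested'"
).fetchall()
print(stuck)
```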

All the data was then mapped back to these tables, to be used if we ever needed to do further processing or cleaning on the final tables. The data was further transformed with additional schema changes to match the client's end use and to map it to the traditional trading data.

The pipelines were deliberately abstracted to allow for the least amount of work to add new data sources in the future. The goal was to make it easy for the client's end users to do themselves as and when required.


The Benefits of Azure Data Factory

ADF runs completely within Azure as a native serverless solution. There is no need to worry about where the pipelines run, which instance types to choose upfront, managing servers or operating systems, or configuring networking; the definitions and schedules are simply set up and the execution is handled for you.

Running as a serverless solution means true "utility computing", which is the entire premise of cloud platforms such as Azure, AWS, and Google. The client only pays for what is used, there are no idle servers costing money while producing nothing, and it can scale up as needed.

ADF also allows the use of parallelism while keeping costs tied to what is actually used. This scaling was a huge benefit of ADF for the client when time is of the essence: one server for 100 hours and 100 servers for one hour cost the same, but the latter finishes the work in 1/100th of the time. Hentsū tuned the solution so that the speed of the initial load was restricted only by the power of the database, allowing the client to balance the trade-off between speed and cost.
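
The trade-off is easy to see with a little arithmetic; the hourly rate below is invented for illustration:

```python
# The parallelism trade-off in numbers; the hourly rate is invented.
rate_per_server_hour = 1.20   # illustrative $/server-hour
total_server_hours = 100      # work required either way

for servers in (1, 10, 100):
    wall_clock = total_server_hours / servers
    cost = total_server_hours * rate_per_server_hour  # identical in all cases
    print(f"{servers:>3} servers -> {wall_clock:>5.1f} h wall clock, ${cost:.2f}")
```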

ADF has some programming functionality, such as loops, waits, and pipeline-wide parameters. Although it is not as flexible as a full language (Python, for example), it gave Hentsū plenty of room to design the workflows.

 


Caveats

There are limited sources and sinks (i.e. inputs and outputs); the full list is available in the Microsoft documentation. Microsoft's goal with ADF is to get data into Azure products, so if you need to move data into another cloud provider, a different solution is needed.

The pipelines are written in ADF's own proprietary "language." This means the pipeline code does not integrate well with anything else, as it would if it were written in a language like Python, which many other ETL tools offer. This is also the key reason we have developed our own ETL platform for more complex solutions, which uses Docker and more portable Python code.

There were some usability issues when creating the pipelines, with a confusing UI or vague errors on occasion; however, these were not showstoppers. Our advice when using the ADF UI is to make small changes and save often. Microsoft is already aggressively addressing some of the issues we encountered.

 


Impact

The client was very pleased with the ADF and Azure SQL Data Warehouse solution. It automatically scales the compute power to process the data as volumes change week by week, scaling up when there is more data and down when there is less. Overall, the solution costs a fraction of what it did previously, whilst keeping everything within the client's Azure environment.



Microsoft recently had a flurry of announcements about Office 365 and especially Microsoft Teams. Below, we highlight some of the key changes important to the asset management space.

Microsoft: Now Available 

Outlook on the web - Conditional Access 

Administrators can now set up Office 365 policies that block users from downloading files from Outlook on the web to non-compliant devices. This provides more flexibility on the go while still retaining a good degree of security around your company files.

Azure AD Password Protection 

Azure AD Password Protection helps you eliminate easily guessed passwords from your environment, which can dramatically lower the risk of being compromised by a password spray attack. Specifically, these features let you:  

  • Protect accounts in Azure AD and Windows Server Active Directory by preventing users from choosing passwords from a list of more than 500 of the most commonly used passwords, plus over 1 million character-substitution variations of those passwords (a simplified illustration of this follows below).
  • Manage Azure AD Password Protection for Azure AD and on-premises Windows Server Active Directory from a unified admin console. 
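
The substitution-variant idea can be sketched in a few lines; the mapping and banned list below are toy examples, not Microsoft's actual algorithm:

```python
# A toy illustration of catching character-substitution variants of
# banned passwords. The mapping and list are examples only; this is
# not Microsoft's actual algorithm.
SUBSTITUTIONS = str.maketrans({"@": "a", "0": "o", "1": "i", "$": "s", "3": "e"})
BANNED = {"password", "letmein", "qwerty"}

def is_banned(candidate: str) -> bool:
    """Normalise common substitutions, then check the banned list."""
    normalised = candidate.lower().translate(SUBSTITUTIONS)
    return normalised in BANNED

print(is_banned("P@ssw0rd"))   # True: normalises to "password"
print(is_banned("Tr1cky$"))    # False
```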

Update to Exchange Mailbox Auditing – Mailboxes Audited by Default and New Mailbox Actions to Audit 

To ensure clients have access to critical audit data when investigating security or regulatory incidents in their tenancy, Exchange Online now automatically enables mailbox auditing on all applicable mailboxes for users of the Commercial service. With this update, it is no longer necessary to configure the per-mailbox audit setting before the service begins storing security audit data. The audited actions are of high interest for understanding the activities taking place within the tenant.

Combined Password Reset & MFA Registration 

Microsoft released a preview of a new user experience that allows users to register security info for multi-factor authentication (MFA) and password reset in a single place. Now when a user registers security info such as a phone number for receiving verification codes, that number can also be used for resetting a password. Likewise, users can change or delete their security info from a single page, making it easier to keep information up to date.

Outlook Calendar: Option to Block Forwarding of Meeting Invites 

Meeting organizers now have the option to prevent attendees from forwarding a meeting invitation. This is available only for users in Office 365. In the first release it can be set when creating or editing meetings in Outlook on the web, with Outlook for Windows to follow shortly after.

In Development: To Keep an Eye On 

Admin tool: TeamSite Auto-Mount 

Admins can specify TeamSite Libraries that they want their users to automatically sync with OneDrive for Business. 

Passwordless Sign-in for Work Accounts 

The Microsoft Authenticator mobile app now supports signing in to your work accounts with your face, fingerprint, or device PIN. This removes the security risk of passwords while keeping the convenience of a device you already own and carry with you. Administrators can configure this option in Azure Active Directory.

For more information on the latest Microsoft updates, check out the roadmap here.

 


To learn more about how we can support you with these updates and more, contact us today. 


The public cloud market has grown and changed over the years. At each step of the way, Hentsū has continued to accumulate experience and knowledge on how to adapt and stay agile.

In this interview, we cover not only the public cloud and how it is shaping the current market, but also cloud trends, the progress of serverless computing, the growth of data, the increase in quant workloads, and more. Companies are fighting to stay afloat by moving away from traditional services such as delivering servers and infrastructure. This makes room for new solutions, fresh tech, and more agile services that deliver business value directly to our clients.


With a clear goal and a solid strategy, companies get closer and closer to their business targets. All of this is enabled quickly thanks to the amazing potential of public cloud and the tooling that comes with it.


What Are the Advantages of Public Cloud?

Businesses can utilize different software as a service (SaaS) propositions. They can also rely on scalability, flexibility, and better ROI. Of course, the idea behind public cloud enablement is not just to have a powerful infrastructure at your fingertips. Primarily, it is about modernizing your business and making it more resilient.
"With public cloud computing, you consume as you need, as opposed to buying upfront." – Marko Djukic, Hentsū
That's not all. The key focus here is that everything is powered and driven by code. Bearing that in mind, Hentsū has always had the ability to adapt and stay focused in today's business world.

Marko Djukic, CEO and founder of Hentsū, reflects on the advantages of the public cloud in asset management, the enhanced security it delivers, and the evolution of data science it enables. Read the interview to learn more: The Power of the Public Cloud.

 



Challenge

In this case study, we examine the uses and advantages of a Docker architecture and the benefits of a Kubernetes cluster.

One of our existing clients had been using their own machine learning strategies to develop an in-house platform that produces trading signals from a range of alternative datasets. The 4-person development team had been running for six months, building a suite of Python applications and Big Data processing pipelines, both on-premises and in the Amazon AWS cloud.


INITIAL COMPONENTS

  • 4-person in-house development team
  • Alternative data sets
  • In-house VMware and AWS cloud
  • Proprietary Python and TensorFlow code
  • MongoDB for data and results

The client approached Hentsū to extend their own small development team and to improve the overall software development. The pace of functionality releases was slow, the applications suffered from complexity, and the code quality was poor.

The in-house developers struggled to work as a unified team. Code was committed and deployed with broken library dependencies, requiring manual fixes every release to ensure the code ran correctly. The applications were disjointed and inconsistent, with very loosely coupled sets of scripts, software, and services. There was no robust deployment of the applications, and once deployed they often needed manual intervention.


Key Requests

  1. Deliver more functionality, faster
  2. Reduce code bugs and improve stability of the application
  3. Deploy the application faster to any environment

Technologies Used

  • Atlassian JIRA – roadmap, sprints and issue tracking
  • Atlassian Confluence – documentation
  • Atlassian BitBucket – source code
  • Atlassian Pipelines – build, testing and deployment
  • Amazon AWS – Elastic Container Registry (ECR) and environments

Solution

DevOps Workflow

Hentsū promptly identified the need to stand up an efficient Continuous Integration/Continuous Deployment (CI/CD) pipeline as fast as possible. The focus had to be on feature development rather than tooling, and on removing anything that made it hard to get great code out of the developers quickly.

As a first phase, Hentsū deployed a development workflow, which was based around the Atlassian suite of products. The goal was to enable rapid iteration of the team's code, whilst ensuring overall software testing, quality control and integration. The workflow relied on properly defined environments – Development, Testing, Acceptance, Production (DTAP).

     


Deploying these steps and enforcing the code flow produced an instant improvement in both collaboration across the team and the quality of the software. There was much better visibility into what any one developer was pushing into the branches and its effect on the overall software.

Separately, Hentsū worked with the developers to restructure the Git software repositories logically into specific areas of concern (apps/services/dependencies). Each repository would contain its own tests, dependency tree, and Bitbucket pipeline YAML config. This enabled more autonomy in development, whilst retaining efficient control over the cross-platform dependencies and testing.

Finally, the agile methodology was improved through clearer structure and scheduling. Ensuring better code quality and higher feature throughput was key, so there was a focus on activities such as sprint starts, standups, development time, smoke tests, and backlog refinement. Product ownership and feedback were improved within the business by clearly identifying each feature's owner and involving them in the sprint process.


Docker Architecture

Hentsū deployed a 3-person team to augment the in-house developers, bringing Python and containerisation expertise to re-architect the applications and make them more stable, self-contained, and easily distributed across environments. As the Git repositories were restructured, corresponding Docker images were rolled out for each specific service.

Docker registry and Elasticsearch services from AWS were used to help with the deployments and monitoring, without having to stand up infrastructure. To help with the deployment, scaling, and management of the Docker containers, a Hentsū-customised Kubernetes platform was rolled out. The customisation allowed the client to overcome limitations in the AWS EKS service and to integrate VMware environments. This ensured consistency of deployment and tooling, and also allowed the applications to be deployed to Azure and Google Cloud Platform (GCP).



Technologies Used

  • Docker – containerisation
  • Angular – front end interfaces
  • Kubernetes – cluster management
  • Helm – package management
  • Kubespray and Ansible – Kubernetes deployment
  • AWS Elasticsearch, Prometheus and Grafana – application and system monitoring

Kubernetes Cluster

Using a Kubernetes cluster, Hentsū enabled automatic scaling as additional functionality. Worker nodes could run as a static pool, which can be useful on-premises to limit the impact on other resources. However, because the Python code could work in parallel, autoscaling allowed the number of nodes to ramp up quickly based on the queue of work: with a bigger queue of incoming data to process, the entire cluster could autoscale to thousands of nodes if needed. Each individual worker node was a small enough unit of compute and memory that autoscaling for different workloads became very linear and cost-efficient.
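
As a hedged sketch of queue-driven scaling logic, the snippet below uses the official Kubernetes Python client; the deployment name, namespace, sizing constants, and queue_depth() function are placeholders, and the real platform relied on Kubernetes-native autoscaling rather than a script like this:

```python
# A sketch of queue-driven worker scaling with the official Kubernetes
# Python client. Names, constants, and queue_depth() are placeholders.
from kubernetes import client, config

FILES_PER_WORKER = 50            # illustrative work per node
MIN_WORKERS, MAX_WORKERS = 2, 1000

def queue_depth() -> int:
    """Placeholder: return the number of pending work items."""
    return 4200

def scale_workers() -> None:
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    desired = max(MIN_WORKERS,
                  min(MAX_WORKERS, queue_depth() // FILES_PER_WORKER))
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name="signal-workers", namespace="research",
        body={"spec": {"replicas": desired}},
    )
    print(f"scaled workers to {desired}")

if __name__ == "__main__":
    scale_workers()
```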

Combining the Hentsū Kubernetes cluster management and AWS meant that the client had many more options to manage the workloads. The cluster could rapidly adapt between specific GPU-enabled worker instances, whilst the client was simultaneously able to use the AWS Spot market for cheaper resources when available and move the application between regions or even cloud providers. Another new possibility this opened up was deploying to bare metal, allowing for VMware to be discarded.


Security Considerations

With the ability to run the Python code in various cloud platforms, and potentially also utilise Platform as a Service (PaaS) offerings from the cloud providers, the security of the intellectual property was a concern. Hentsū deployed the entire solution in strict adherence to its own internally developed ISO 27001 cloud security checklist. Encryption was built into the application from the start, and all user access controls were tied back to the client's corporate Active Directory.


Impact

The improvements and options Hentsū enabled meant developers were happier and substantially more productive with their coding. Both the Docker architecture and the Kubernetes cluster were employed successfully. Additionally, the team collaboration and the engagement of the business stakeholders meant that more features than initially planned were released to the end users, and in a faster timeframe.

The number of bugs raised in production from each two-week release cycle was reduced from an average of over 30 to below two. This code quality success was ensured by the improvements Hentsū implemented to scheduling and structure, such as the pipeline unit tests, the consistency of the development and acceptance environments, and the rigorous smoke tests.

The greatest impact was on the overall delivery of the project. When Hentsū was first engaged, the estimated time remaining to deliver the project was 18-24 months; with the changes Hentsū delivered, the project was completed in under six months.

See how Hentsū can enable your data science workloads across multiple clouds, using DevOps techniques and Docker containerisation.
