The IT industry is in the middle of a major shift, and the cloud has become the destination of choice. With many major businesses moving to the public cloud, there has also been a strong push towards scaling and automation. Across both the technology and financial sectors, complex computing tasks that used to be handled manually are moving to more streamlined, automated processes. That's where grid computing comes in.
When it comes to cloud adoption, “businesses are pushing PaaS first, and that has a lot of positive effects. To begin with, things are made easier right off the bat, because essentially all the building blocks are there, so any initial business can run workloads and get going within a day or two, rather than wasting too much time setting up the foundation,” stated Hentsu CEO Marko Djukic.
An increasing number of businesses are implementing grid computing clusters to handle massive workloads.
Grid Computing refers to making use of the shared power of a cluster of computers to process computationally intensive tasks that would otherwise bog down a single workstation. To put it simply, jobs can be submitted from one computer to the grid, which then processes the data and returns the output to the user. Importantly, multiple people can simultaneously make use of the same grid by intelligently managing how the computers in the grid allocate resources. This allows for significantly improved workflow for companies whose work is optimized for grid computing.
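To make the submit-and-collect pattern concrete, here is a minimal sketch in Python using Dask's distributed scheduler as the "grid". The scheduler address and the price_portfolio function are illustrative placeholders, not part of any specific setup described here.

```python
# Minimal sketch of submitting jobs to a grid and collecting the results,
# using Dask's distributed scheduler. Address and workload are placeholders.
from dask.distributed import Client

def price_portfolio(chunk):
    # Stand-in for a computationally intensive task; replace with real work.
    return sum(chunk)

client = Client("tcp://scheduler.example.internal:8786")  # connect to the grid's scheduler

# Submit many independent jobs; the scheduler spreads them across worker nodes.
chunks = [list(range(i, i + 1000)) for i in range(0, 10000, 1000)]
futures = client.map(price_portfolio, chunks)

# The results come back to the submitting machine, as described above.
results = client.gather(futures)
print(len(results), "chunks processed")
```

Because the scheduler manages resource allocation, several users can submit work to the same cluster at once, which is exactly the multi-user sharing described above.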
Opening with a much-needed refresher on public and private cloud, as well as defining key terminology, the talk flowed into an investigation of the challenges companies face when working with different data sets, and a look at the solutions currently available to them.
To recap, grid computing denotes one main computer distributing information and tasks across multiple networked computers, usually towards one objective. Let us simplify a bit and illustrate how things operate. A grid computing network typically has three types of machines: a control node, which schedules and coordinates work across the grid; provider (or worker) nodes, which contribute their spare processing power and run the tasks; and user nodes, from which jobs are submitted and results collected.
One of the key aspects of grid computing is flexibility, and more importantly, computing power. In other words, it boils down to spreading large amounts of data and work across a grid of computers rather than placing the entire demand on a single supercomputer. You can also find out how grid computing yields faster results for your business.
But for now, let's focus on the most frequently asked questions related to grid computing.
Why is grid computing important?
What are the real benefits and biggest advantages of grid computing?
Well, so many businesses rely on this particular way of completing joint tasks because of a few key advantages: flexibility and raw computing power, the ability to spread heavy workloads across many machines instead of relying on a single supercomputer, and more streamlined, automated processes overall.
It has to be highlighted that ephemeral computing has also been a huge part of the innovation process in modern-day tech. It carries tremendous advantages, but we have to ask the obvious question: what does that mean for businesses, and for the SaaS industry in particular?
Describing a process as “ephemeral” means it is temporary and brief. With ephemeral computing, the notion of dealing with a surplus of servers, or indeed a shortage of them, simply stops being an issue. To put it into perspective, ephemeral computing services are agile and adjust according to the problems and needs at hand.
Utilizing ephemeral clusters that scale up and down as needed is quite a boost to handling workloads in general. In short, it limits or eliminates convoluted pre-planning and the reliance on heavy, permanently provisioned server power.
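As a rough sketch of what "scale up and down as needed" can look like in practice, Dask clusters support adaptive scaling: workers are added when jobs queue up and released when idle. A LocalCluster is used here only to keep the example self-contained; in a real cloud deployment you would point this at a cluster manager instead.

```python
# Sketch of an ephemeral, adaptively scaled cluster. Nothing is provisioned
# permanently: the cluster grows under load and shrinks back to zero workers.
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=0)      # start with no workers at all
cluster.adapt(minimum=0, maximum=20)     # grow to 20 workers under load, then shrink back
client = Client(cluster)

futures = client.map(lambda x: x ** 2, range(1_000))
print(sum(client.gather(futures)))
```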
“It’s basically all about serverless. Code that is distributed across ephemeral compute that handles the analysis and then churns out the answers without having to deal with what’s actually the underlying compute,” says Marko Djukic.
He added: “Compute is just a utility you consume as needed; nothing exists permanently.”
To summarize, in PaaS and SaaS scenarios, giving your business operations and workloads the ability to grow and shrink automatically can completely remove the scalability issues you may be experiencing with traditional server-heavy computing.
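To illustrate the serverless pattern from the quotes above, here is a minimal AWS Lambda-style handler in Python. The analyse() function and the event shape are assumptions made for the example; the point is that no server is provisioned or managed, and compute exists only while each request runs.

```python
# Minimal serverless sketch: the platform invokes handler() on demand and
# tears the compute down afterwards. Event shape and analyse() are illustrative.
import json

def analyse(records):
    # Stand-in for the actual analysis logic.
    return {"count": len(records), "total": sum(r.get("value", 0) for r in records)}

def handler(event, context):
    records = event.get("records", [])
    result = analyse(records)
    return {"statusCode": 200, "body": json.dumps(result)}
```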
There is a breadth of choices for grid computing and for migrating workloads into cloud environments. We covered some of them, ranging from traditional MATLAB setups to fully managed Platform as a Service (PaaS) environments from Google using BigQuery and Datalab, and ran some live demos ripping through 2TB of full-depth market data.
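For a flavour of the PaaS end of that spectrum, here is a rough sketch of running an ad-hoc query over a large market-data table in BigQuery from Python. The project, dataset, and table names are placeholders; BigQuery scans the data with its own managed compute, so nothing has to be provisioned up front.

```python
# Sketch of querying a large market-data table in BigQuery.
# Project, dataset and table names are placeholders, not real resources.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT symbol, AVG(price) AS avg_price
    FROM `my-analytics-project.market_data.full_depth_ticks`
    GROUP BY symbol
"""

for row in client.query(query).result():
    print(row.symbol, row.avg_price)
```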
We've built up a wealth of in-house expertise running grid computing workloads across all three major public clouds - Amazon AWS, Microsoft Azure and Google Compute Engine. We can get you up and running quickly with pre-tested designs and architectures, greatly reducing the traditional pain points and TCO of running grid computing.
Get in touch: email@example.com - we'd love to hear about your grid computing challenges.

Video: https://www.youtube.com/watch?v=zNDkFyVxYc4
We will use Flow to generate some JSON for the organizational hierarchy, which can then be used in various org charts around the business. There is a specific D3.js library that will be used to display it all, but for now we're going to cover the overall structure needed to generate it properly.
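To make the goal concrete, D3.js hierarchy layouts conventionally consume nested JSON of the "name plus children" shape sketched below. The names here are made up, and the exact fields your chart library expects may differ, but the Flows described next produce something along these lines.

```python
# Illustrative target shape for the org-chart JSON (names are made up).
import json

org = {
    "name": "Jane Smith",                 # the manager at the top of the tree
    "children": [
        {"name": "Alex Jones", "children": []},
        {
            "name": "Sam Lee",
            "children": [
                {"name": "Chris Park", "children": []},
            ],
        },
    ],
}

print(json.dumps(org, indent=2))
```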
For transparency, using Flow is probably not the best way to do this, given the number of workarounds needed to achieve what should be basic programming techniques. This org chart approach pushes the limits of what we can do with Flow, just to see how far we can get with its functionality. Its huge advantage is that it is available to anyone using Microsoft 365, so even though it may not be an elegant solution at times, it is hugely powerful because it puts this kind of automation in the hands of every user.
Each HTTP call has a performance hit, which adds up in a recursive traversal, so this is not the most efficient way to traverse a very large organizational hierarchy in Microsoft 365. A department of about 30 individuals takes roughly 50 seconds to traverse, and a larger group of 60 individuals can approach the 120-second maximum. These timings fluctuate, most likely with the load and capacity of the underlying compute that Microsoft puts at Flow's disposal.
There are two Flows. “Get Manager Org JSON” is currently the main entry point; it pulls the manager details and then calls the “Get Direct Reports JSON” Flow to get the reports, which in turn has the logic to call itself multiple times to fetch all the levels of reports below.

[Screenshot: the two Flows]
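For readers who prefer code to Flow diagrams, here is roughly the same recursion expressed in Python against the Microsoft Graph API, which exposes the same user, manager, and direct-report data the Flows work with. This is an illustrative equivalent, not what the Flows themselves run; obtaining the access token, error handling, and paging are all left out for brevity.

```python
# Sketch of the recursive org traversal using Microsoft Graph.
# Token acquisition, paging and error handling are deliberately omitted.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def get_org_json(user_id, token):
    headers = {"Authorization": f"Bearer {token}"}
    me = requests.get(f"{GRAPH}/users/{user_id}", headers=headers).json()
    reports = requests.get(f"{GRAPH}/users/{user_id}/directReports", headers=headers).json()
    # Recurse into each direct report, mirroring the "Get Direct Reports JSON" Flow.
    children = [get_org_json(r["id"], token) for r in reports.get("value", [])]
    return {"name": me.get("displayName"), "children": children}
```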
We could look at some form of caching. If the Flow is traversing an organizational hierarchy and along the way encounters a manager who has already been mapped out, we could just reuse that result. So if we start with lower-level managers, save their results to, say, a SharePoint library, and then assemble that existing JSON into any calls above that manager, we can avoid doing the traversal from scratch.
We start by adding a step (1) to our “Get Manager Org JSON” Flow, which saves the JSON output to a SharePoint site. We use the manager's name as the filename so that we can reference the same JSON later.

[Screenshot: saving the JSON to SharePoint]
Then, in the “Get Direct Reports JSON” Flow, we add the corresponding conditional within the loop: if the direct report is a manager and that manager already has JSON saved to the SharePoint site (2), simply read that JSON rather than traversing; otherwise, if there is no saved JSON, traverse as before.

[Screenshot: reading the saved JSON from SharePoint]
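The same caching idea, sketched in the Python terms used earlier: before recursing into a manager, check whether their JSON has already been saved, and reuse it if so. A local directory stands in for the SharePoint library here, and the sketch reuses the get_org_json function from the earlier example.

```python
# Sketch of the check-cache-else-traverse conditional. A local folder stands
# in for the SharePoint library; get_org_json is the earlier recursive sketch.
import json
from pathlib import Path

CACHE_DIR = Path("org_cache")  # stand-in for the SharePoint document library

def get_org_json_cached(user_id, display_name, token):
    cached = CACHE_DIR / f"{display_name}.json"
    if cached.exists():
        # Manager already mapped out: read the saved JSON instead of traversing.
        return json.loads(cached.read_text())
    result = get_org_json(user_id, token)   # fall back to the full recursive traversal
    CACHE_DIR.mkdir(exist_ok=True)
    cached.write_text(json.dumps(result))
    return result
```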
The larger group, which took ~120 seconds to traverse, now completes in under 10 seconds when assembling 5 pre-populated org charts for the direct reports. That is a tremendous time saving from a simple tweak that saves results to a SharePoint site. More importantly, it shows how several Microsoft 365 features can be joined up to solve an interesting programming challenge.
There are many paths to take on the road to strong security. Our goal is to help you minimize and ultimately eliminate the risk of losing personally identifiable information and, of course, to make sure it all stays compliant with ever-changing regulations. These are vital ingredients for enterprise-level business operations, especially when adapting to the COVID-19 crisis. With companies increasingly focused on the stay-at-home environment, protecting data and information has never been more important.