Big Data Projects With Stackable In The Ionos Cloud


The start-up Stackable offers big data solutions in the enterprise cloud operated by Ionos. In contrast to the hyperscalers' offerings, Stackable provides an open and flexible platform that consistently relies on open-source components. Customers can choose which tools they want to combine and in which version. This should make it possible to create tailor-made, open-source-based solutions that do not bind the customer to a specific provider.

“The open-source market for big data platforms has changed significantly since 2021. Cloudera has put all its products behind a paywall.” After Cloudera changed its offering structure, there is no longer an open, free, all-in-one big data solution. This move joins a long list of license changes in recent years; just look at what happened to Redis, MongoDB, and Elastic.

“Elastic is no longer open-source software since it switched its license from Apache 2.0 to the Server Side Public License (SSPL) and the Elastic License.” A good opportunity, then, to return to the old virtues of open source and, at the same time, to remind yourself of the advantages offered by the Gaia-X project.

“Ionos participates in the Gaia-X project at the invitation of the Federal Ministry for Economic Affairs and Energy (BMWi). We are working together with the community and as part of the Technical Committee to achieve true interoperability and portability through infrastructure standardization – also a basis for a data ecosystem. In this infrastructure, data can be stored, processed, and exchanged in a trustworthy manner, and entire ecosystems for solutions can be created.” This is particularly important with regard to data sovereignty. “Open source software is a key to digital sovereignty – always based on values such as openness, data security, data protection, etc.”

With Gaia-X To Data Autonomy

Although Gaia-X is open to offerings from hyperscalers, it is also a European alternative to them, as those offerings are primarily designed to tie customers in over the long term. “Although hyperscalers deliver big data infrastructures from the cloud, there are several hurdles before getting this infrastructure up and running: the individual organization, the APIs, the security, etc.” It is therefore not surprising that Ionos, with its infrastructure, and Stackable, with its big data solution, are aimed more at medium-sized and smaller companies that want to see their requirements and wishes implemented. Medium-sized companies, in particular, depend on differentiation from the competition.

The essential task of the service provider Stackable is therefore to bring together knowledge, tools, and newly required interfaces. “Stackable also offers its customers system and service integration or composition as a service, either as consulting or as self-service (best of breed).”

“We write software that knows how to deploy, monitor, and manage other software. Our knowledge primarily relates to open source and big data, i.e., Apache Hadoop and the ecosystem that has developed from it, with components such as Apache Kafka, Apache Spark, Apache NiFi, etc.” This knowledge is encapsulated in operators.

Operators are a pattern primarily used in the Kubernetes ecosystem. “Operators ensure that a site’s software works reliably.” They codify human operators’ knowledge of how an application should work, how it should be monitored, and how it should be operated.
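To make the operator pattern more concrete, here is a minimal, framework-free Python sketch of the reconcile loop at its core; the resource fields and the helper callables are hypothetical placeholders, not Stackable's actual code:

    import time

    def reconcile(desired: dict, observed: dict) -> list:
        """Compare desired and observed state and return the actions needed to converge."""
        actions = []
        # Hypothetical example: keep a Kafka cluster at the declared broker count and version.
        if observed.get("brokers", 0) != desired["brokers"]:
            actions.append(("scale_brokers", desired["brokers"]))
        if observed.get("version") != desired["version"]:
            actions.append(("rolling_upgrade", desired["version"]))
        return actions

    def control_loop(get_desired, get_observed, apply_action, interval_s: float = 10.0):
        """Observe, diff, act, repeat - the loop an operator codifies on behalf of human operators."""
        while True:
            for action in reconcile(get_desired(), get_observed()):
                apply_action(action)
            time.sleep(interval_s)

Real operators delegate the observing and acting to the Kubernetes API, but the pattern of continuously driving the observed state toward a declared target is the same.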

All components of a big data solution should be cast in code for reuse. “Kubernetes nodes, for example, come from a provider like Ionos as an IaaS service. Data assets come from Gaia-X cloud providers, for example from the German Weather Service (DWD).” Integrating additional services such as log aggregation, metrics collection, and security is complex and tedious. “Therefore, we are hoping for Gaia-X standards here, such as the Federated Services.”

Stackable, Ionos, And Gaia-X

Stackable itself supplies its enterprise customers and service providers with infrastructure as code, operators, agents, and more. The seamless integration with Gaia-X allows, first, the infrastructure to be run remotely and, second, implementation in hybrid public-private scenarios or in the multi-cloud.

Running in the cloud also enables managed services. Another step is to make the SCS, i.e., the Sovereign Cloud Stack, compatible. “Individual members of Gaia-X have joined forces to develop the SCS based on OpenStack.” The SCS is a technology stack based on OpenStack, an open-source cloud orchestration tool, plus a cloud operating system.

In addition to the SCS, other cloud stacks are Gaia-X compliant. This includes the IONOS High-Performance Stack, which is also based on open source and has been tried and tested on the market for more than ten years. The SCS, on the other hand, is still under development and still has to prove itself under full load.

“It’s not about creating new standards for metrics, logging, and similar topics necessary for operating an enterprise-capable platform. Rather, the goal is to integrate existing solutions as far as possible.” Examples of such standards and integrations are OpenTelemetry, OpenCensus, and Open Policy Agent.
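As a small illustration of reusing such an existing standard instead of inventing a new one, the following sketch emits a trace span with the official OpenTelemetry Python SDK; the tracer and span names are invented for the example, and the console exporter stands in for any OTLP-compatible backend:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

    # Configure a tracer provider that prints spans to stdout.
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("bigdata.platform.example")  # hypothetical instrumentation name

    # Any platform component can emit vendor-neutral telemetry this way.
    with tracer.start_as_current_span("ingest-batch"):
        pass  # real ingestion work would happen here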

Summary

Because it only uses open standards and tools, Stackable does not force any disruption on the customer but instead fits into the existing IT landscape under an open-source license.

By using open-source software and integrating existing standard solutions, Stackable tries to cause as little customization work as possible for the customer. “The goal is not for the customer to bend their environment to suit us. Rather, we want to be so flexible that we can dock onto existing integration points at the customer.”

Stackable sees the primary added value in the fact that the knowledge needed to operate open-source software is supplied as code and no longer has to be built up in-house. This allows Stackable customers to concentrate fully on their value-added use cases, for which the company merely provides the platform basis.

ALSO READ: Oracle Tunes Cloud Analytics Capabilities

Oracle Tunes Cloud Analytics Capabilities


Data analysis has long been one of Oracle’s core competencies. But only now is the former database specialist pushing this line of business into the spotlight. Release 6.0 of the provider’s analytics platform OAC (Oracle Analytics Cloud) scores with a shorter analysis workflow, a more convenient user interface, and many connectors for different data sources.

The announcement conjured up the proverbial jack of all trades: the “new generation” of OAC enables all employees of a company to analyze data; it is suitable for all applications, and it can access all the data it needs, promised TK Anand, senior vice president of Oracle Analytics. This accelerates the journey from raw data via insights to action, making the organization more agile and profitable.

Version 6.0 of the SaaS platform comes up with a new user interface called “Redwood” and many innovative functions that may justify the buzzword of the “new generation”. According to Björn Stehn, Business Development Director Oracle Analytics & Data Innovation, OAC has already gone through two development phases: “First we moved analytics with specified, i.e. quasi curated data to the cloud, then we brought the topic of self-service forward, and in the third step we are now adding augmented analytics.”

As Stehn further explains, the market research company Gartner understands augmented analytics to mean functions that enable the user to draw more substantial insights from the analyzed data than before, for example by enriching it with additional information or by digging deeper.

These features include, among others:

  • “intelligent” processing of the raw data;
  • natural language processing for input and output;
  • analysis of graph networks;
  • presentation of the results at different geographical levels;
  • association and “market basket” analysis for marketing purposes;
  • integrated text analysis;
  • deeper integration of machine learning models into the analysis workflow;
  • transparent weighting of ML algorithms.

Consistent User Experience

With “Redwood”, Oracle wants to set the future standard for the user interface of all its products. The visual “design language” should always give the user an identical “experience” – regardless of whether they are using the web browser or the mobile app.

The newly designed app shows the user, for example, the visualization of the analysis results via dashboards. These dashboards can also be used to share the findings with the whole team.

The new functions for processing “natural” language also fall into the “user comfort” category. On the one hand, they allow relatively informal queries in 28 different languages, including many abbreviations and synonyms. On the other hand, the “Natural Language Generation Engine” enables the output, i.e. the presentation of the analysis results, in colloquial language. Those who prefer listening to reading can select the podcast as the audio output in the app. The demonstration of this function as part of the product announcement sounded quite “natural”.

AI Across The Entire Analytics Pipeline

Integrating machine learning functions into the analysis workflow is almost state of the art. However, Oracle wants to go one step further. The desired “democratization of access to data analysis” means that not only IT but also end-users can use numerous “artificial intelligence” functions – across the entire workflow of preparation (Prepare), modelling, exploration, distribution (Share) and use (Consume).

This starts with the “smart” data preparation: A “profiling engine” helps filter out defective or sensitive data and, if possible, provides information on how to repair it. Thanks to a built-in, expandable knowledge base, the user also receives suggestions for enriching the data.

There is a heated debate about whether ML models may contain biases that have not been adequately questioned. Are certain population groups being disadvantaged with the help of algorithms? Detailed information about the model, the implemented influencing factors, and their weighting is necessary to answer this question. With the new OAC version, Oracle makes it possible to trace the probabilities behind predicted results and, if necessary, to correct the model – at least when it was created in the in-house Autonomous Data Warehouse.

Strengths And Weaknesses

For the Gartner analysts, Oracle is one of the “visionaries” in analytics and BI platforms, as documented in the February “Magic Quadrant”. As market watchers have noted, Oracle is investing “aggressively” in augmented analytics, delivering such capabilities earlier than many of its competitors. The platform also offers a high degree of automation, such as automatically generated insights. The user support and the integration into the Oracle product range also receive praise.

According to Gartner, the latter also reveals a weak point: OAC is firmly “Oracle-centric”. This does not mean that the platform cannot analyze external data sources. On the contrary: the list of available connectors includes a wide variety of data storage systems – starting with Excel and Dropbox, through third-party database systems from the proprietary and open-source environment, including those from Amazon Web Services (AWS) and Google, to less widespread analytics tools and popular apps like Salesforce.

Instead, Gartner criticizes the close coupling between the Oracle applications and the OAC platform during development. The Fusion Analytics Warehouse (FAW) toolbox can only be used with Oracle’s enterprise applications. To achieve equivalent functionality, other users would need to create their own applications using OAC. That is why Oracle will probably primarily address the existing customer base with OAC.

ALSO READ: Data Gateways – And The AI ​​Benefits From Preparatory Work

Data Gateways – And The AI Benefits From Preparatory Work


Data Gateways: Modern companies rely on data analysis systems equipped with AI components for decision-making. The more these tools intervene in management processes, the more their demands on the quality of the source data resemble those of human decision-makers. Data gateways can assist them ideally in the processing of information.

A Google search for “data analysis” and “assistance” quickly turns up many job postings. A financial service provider, for example, who supports its customers in “analysis, strategy definition, negotiation, decision-making, communication/implementation and monitoring” of business operations, is looking for a person who deals with “system recording and analysis of annual financial statements, business evaluations, restructuring concepts and the preparation of balance sheet and cash flow planning” and does “industry-related research.”

Analysis Requires Processing

Pay attention to the points “system recording” and “research.” What is required here is the arduous task of consolidating disparate source data and gathering additional information that places the results in a valuable comparative context. The managers at the customer and the service provider want a basis for decision-making that is understandable for them, enables them to classify their position compared to competitors and customers, and delivers results that can be quickly and comprehensibly communicated to other stakeholders.

Artificial intelligence that supports management in business decisions places similarly high demands on the initial information as the employer from the example above places on the source material and its preparation. If, in an extreme case, an AI that takes on the corresponding tasks were only fed with numbers, in the end only trends in the form of numbers would come out, and the task of concrete interpretation and communicative processing would then again lie entirely with the human decision-makers.

Science fiction fans may rightly be thinking of Douglas Adams’ novel “The Hitchhiker’s Guide to the Galaxy,” in which the largest supercomputer in the universe, after millions of years of computing, answered the question of “life, the universe and everything” with “42” and left the perplexed protagonists alone with this answer.

Bringing it back to today’s everyday production processes, this would look like this: A production plant notes in its log files, purely based on EAN numbers, which components it has completed over a certain period and in what number. At the same time, a merchandise management system provides data on the raw materials purchased and the electricity consumed, and the internal accounting system provides information on the workload. The first two systems at least write rudimentary context information – about the raw materials, for example, the suppliers – while the third is almost too informative, because by default it garnishes the team utilisation information with personal data that, due to data protection regulations, must not flow into calculations for business forecasts.

Without Context, AI Comes To Nothing

A typical requirement in this environment could be to derive from the data described whether the personnel planning, the purchasing quantities, the supplier selection, and the contracts with the energy providers are future-proof for the company concerned. However, for analysis software to be programmed in such a way that it delivers the desired results directly in a dashboard, more is needed than the formal normalisation of the data and the setup of the communication channels: data must be enriched at crucial points and left out at others. The EAN numbers from the production line should be supplemented with information about which raw materials are required for which component and how many screwing or welding processes are necessary for final production. All information that could allow assignment to individual employees should be removed from the personnel utilisation data.

This could be achieved by adjusting the source systems, but this would burden the relevant assets with additional, unfamiliar tasks. In addition, further evaluation interests could quickly come into play, each entailing the implementation of yet another individual interface – until, in the end, there is an unmanageable number of two-way relationships that require several agents running in parallel on each device involved.

Tireless Transformation

The problem can be solved much more flexibly and efficiently if an organisation provides its highly advanced analysis solutions and AI instances with dedicated, specialised assistance similar to that of human analysts. A “data gateway” can ideally take on this role. The data gateway from the manufacturer Cribl, for example, could cover the requirements with the following functions:

  • Enrichment: During data transfers, the gateway can add exactly the information that the source system does not provide but that is desired in the target system. Enhancements of this kind are also crucial in the security area, where the report of a possible attack on a system must be quickly linked to information on its risk context and potential vulnerabilities (a simplified sketch of these gateway functions follows after this list).
  • Reduction Of The Data Volume: Not only the addition but also the subtraction of data is possible. This reduces the volume of data transferred in the company, and the processing speed for all analysis projects can increase.
  • Central And Event-Based Routing: The user can set up rules according to which the data gateway dynamically changes the destination of a data transfer depending on the content or other characteristics of the information supplied and, for example, feeds software such as Splunk, Elasticsearch, InfluxDB, and security monitoring solutions alternately or simultaneously. Existing agents (Splunk Universal Forwarder, Beats agent, and so on) can continue to be used as long as they are still needed. In this way, a step-by-step or limited-scope implementation of the data gateway is possible.
  • Encryption: A data gateway can also meet the requirement of standards such as the GDPR or PCI DSS (the security standard of the payment card industry), mentioned in the example above, to make personal data accessible only where a particular interest in it can be proven. It then encrypts the information on the fly, preserving it for any forensic investigations that may be necessary but ensuring that access is only granted in suspicious cases.
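The following Python sketch illustrates, in strongly simplified form, how the enrichment, reduction, and content-based routing described in the list above can work together; the field names, thresholds, and destinations are purely illustrative and do not reflect any specific vendor's product or API:

    import copy

    VULN_CONTEXT = {"web-01": ["CVE-2021-44228"]}   # illustrative enrichment lookup
    PII_FIELDS = {"employee_name", "employee_id"}   # personal data to strip out

    def process_event(event: dict) -> tuple[str, dict]:
        """Enrich, reduce, and route a single event; returns (destination, transformed event)."""
        out = copy.deepcopy(event)

        # Enrichment: add risk context the source system does not provide.
        out["known_vulnerabilities"] = VULN_CONTEXT.get(out.get("host"), [])

        # Reduction: remove personal data before the event leaves the gateway.
        for field in PII_FIELDS & out.keys():
            del out[field]

        # Event-based routing: choose the destination from the event's content.
        destination = "siem" if out.get("severity", 0) >= 7 else "metrics-store"
        return destination, out

    print(process_event({"host": "web-01", "severity": 8, "employee_name": "Jane Doe"}))

A real gateway applies such rules declaratively and at high throughput, but the principle of transforming and routing each event in flight is the same.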

Conclusion

AI accelerates data analysis in heavily digitised and automated business processes. If a fast, tirelessly assisting data gateway is at its side, this increases the accuracy of the analysis and the significance of the results – and the user gains the opportunity to implement new analysis ideas without great effort.

ALSO READ: How Storage Affects The Success Of Data Analytics

How Storage Affects The Success Of Data Analytics


Modern data analytics can be a challenge when the underlying data platform can’t keep up – which is why modern storage technology is more critical now than ever. Data has never been more critical to businesses than it is today. Data drives connected cities, influences business decisions in real time, and makes every customer interaction a personal and tailored experience. However, many companies still do not know how to make use of their data sources.

There is often a lack of a strategy for collecting all available data, analyzing it, and making holistic decisions based on it. Integrating the previously separate data silos into one data environment frequently fails. An adequate data storage and delivery strategy is therefore critical to getting the most value out of the data.

Changing Data Usage Places New Demands On The Infrastructure

The global volume of data is exploding. According to one forecast, it will grow to an unimaginable 175 zettabytes by 2025. IDC made this prediction despite the events of 2020, which caused consumer data usage to skyrocket.

The way data is used has changed. Cloud applications, mobile devices, social media platforms, and IoT sensors contribute to data growth. Each of the storage workloads required for this fulfills essential tasks. Applications that rely on file storage have become more sophisticated than database applications. The technical silos typically seen between these workloads have created challenges that can no longer be ignored. Businesses are stuck without a data platform that can keep up with today’s analytics tools.

The key to better data analysis is modernizing the IT infrastructure. Companies that make targeted investments here are well-positioned for modern analysis requirements. This is the only way to derive valuable insights from big data projects and use them strategically. The aim is to improve operational efficiency, the user experience, and compliance, regulatory, and security processes, to develop new products, and to generate new revenue streams.

One Step Ahead Of The Competition

In a recent research project, Pure Storage partnered with the Enterprise Strategy Group (ESG) to examine the benefits for companies investing in analytics versus those that do not. The study showed that companies with mature data analysis capabilities are one step ahead of the competition. Compared to companies that don’t leverage data analysis skills, these companies were 3.2x more likely to perform better on customer satisfaction, 2.4x more likely to have increased revenue per employee over the past two years, and 2.7x more likely to have had a shorter time to market.

The critical component here is a high-performance flash-based cloud-ready solution that supports fast file and object storage. This provides the technological basis to support diverse, robust, and mature analysis programs with real-time or streaming analysis. This makes it possible to achieve positive and powerful results beyond increased sales potential.

Guide To The Implementation Of A Modern Data Strategy

Pure Storage recently published a guide to help organizations achieve their goals for modern data analytics. It takes an in-depth look at five areas where data storage solutions must offer advanced capabilities to enable organizations to evolve their analytics maturity. The most important findings are:

High-performance Storage is the path to faster insights. Companies that gain faster insights with real-time analytics can also act faster on potential business opportunities. A high throughput storage solution can support many users with massively concurrent connections and consistent performance at scale.

Agility helps keep up with changing data and applications. Workloads and operational requirements are constantly changing, in some cases hourly. Administrators are too busy with day-to-day routines, leaving them little time to adapt storage systems to constant changes manually. Moreover, this process is anything but agile and by no means a wise use of data architects’ precious time. A better approach is a cloud-optimized storage solution that supports variable data patterns and can scale with multiple workloads.

A cloud-optimized infrastructure is a must for running modern analytics applications. Organizations can enhance analytics capabilities with an on-premises storage foundation that offers the benefits of the cloud while operating on-premises. One of the critical features of a cloud-optimized storage solution is unified fast file and object storage, known for short as UFFO (Unified Fast File and Object Storage). This enables analysis teams to achieve higher query performance and deliver insights faster.

Match data storage to the usage model to keep costs down. Ideally, organizations only want to pay for the IT resources they actually use while scaling the storage platform up and down to meet ever-changing business needs. Subscription models for non-disruptive storage environment upgrades can help to stay up to date while keeping costs low.

Minimize the impact of outages and maximize uptime. The goal of data-driven companies is to be able to act on essential insights at lightning speed. As a result, they cannot afford planned or unplanned downtime for the storage solutions that support their advanced analytics applications.

It Depends On The Storage Solution

Data analytics offers excellent opportunities. With the right qualities in a data storage solution, organizations can empower more people—from developers to data scientists to generalists—with the modern tools they need to work with big data. The resulting insights are beneficial for making better and faster decisions.

Contemporary data platforms can help increase the maturity of data analytics and meet the most demanding requirements of modern data usage. The UFFO storage approach scores with cloud-like simplicity and agility while maintaining high performance and control over the data environment.

There are some challenges on the way to effective modern data use, but the technological barriers are rapidly falling. Today, when businesses are drowning in data, they can turn to storage solutions to meet the challenges. The possibilities are almost unlimited for modern, data-driven companies that have more mature analytical skills and are advancing confidently in this discipline.

ALSO READ: Building A Cloud Data Platform For Analytics And AI

Building A Cloud Data Platform For Analytics And AI


Artificial intelligence is a trending topic that more and more companies want to use. Intelligent and automatic data evaluation is of particular interest. However, the successful use of machine learning requires extensive data sets, because the AI model is trained with them over many iterations until it ultimately delivers reliable results.

But what does the IT architecture behind it have to look like? After all, it must be able to process the sometimes vast amounts of data and scale quickly. This is anything but a trivial matter, so a conventional architecture is no longer sufficient. Instead, innovative data platforms are needed for this new type of digital application. Below we present an overview of the structure of such an architecture, which we developed in a customer project using the Google Cloud Stack.

Challenges In The Introduction And Application Of AI-Supported Data Analysis

The first challenge is the scaling of the IT infrastructure concerning the amount of data. In the next three to four years, an increase of about fivefold is expected. The IT infrastructure that is to house an AI solution for data analysis must therefore be designed for growth from the outset. The increase in continuous data streams—up to 25 percent overall—makes stream processing preferable to batch processing. This often entails a technical change.

To keep up with this, companies have to set a new course concerning the IT architecture and the entire organization. To benefit sustainably from data analyses of business processes, it is not enough to examine the data pools of isolated silos. Instead, the organization must adapt to a “data culture,” connect previous silos, and feed data from all company areas to the AI.

A large part of the data that will flow into analysis processes in the future will be unstructured data – for example, images, video and audio files, or continuous text. It makes sense to store and process this data using non-relational (or NoSQL) databases such as MongoDB or CouchDB. However, structured data in SQL databases will by no means lose its relevance in the medium term. The unstructured data must therefore be combined and merged with structured data, which represents an additional challenge.

In addition to all these challenges, know-how and human resources in AI/ ML represent a bottleneck. The organization and infrastructure must generate as much output as possible from as few input hours as possible. This works best with a central Enterprise Data Warehouse (EDW), the structure of which is shown in the next section. For the customer project mentioned, an EDW was introduced with this methodology.

A Central Enterprise Data Warehouse Accelerates Technological Change

To successfully move from a silo to an EDW infrastructure, the following approach has emerged:

  • Migration of the existing data lake or data warehouse to the cloud: A cost estimate for various architecture models for the EDW was prepared before the project. This concluded that a migration to the cloud could reduce the total cost of ownership (TCO) of a data warehouse by more than half compared to the on-premises option. From an economic point of view, it is also interesting that no capital investments are necessary, but only operating and minor administration costs are incurred for the cloud. Predefined migration scripts help make the transition easy – in our example project from an on-premises solution with Teradata to Google BigQuery.
  • Breaking down the silo structure, exposing the analytics capabilities, and building a data culture across the organization: Companies generate data in various silos and channels. The fragmentation of the silo landscape is constantly increasing in the course of digitization because each department uses its own software. This software is often also obtained via a software-as-a-service model, so the data has to be transferred from the provider’s databases to the company’s own systems via interfaces. The data from the silos must first be centralized in the EDW and then made available to all of the company’s stakeholders in a decentralized manner. To enable AI- and data-supported business decisions at all levels, employees throughout the company also need the appropriate access. In the central platform, all processes are bundled and examined holistically.
  • Introduction of context-related decision-making in real time: Two factors are decisive for a profitable business decision: on the one hand, the execution time or latency; on the other hand, the data context. Above all, spatial data – for example, where a request comes from – is essential for understanding the analyzed events. Using geographic information systems (GIS) combined with AI was an essential goal in our implementation example with BigQuery. The advantage of this approach is that data can be streamed into BigQuery and further into a SQL database in real time, and AI analyses are possible during the streaming process (a simplified sketch of this streaming step follows after this list).
  • As with almost all software solutions, it is also necessary with AI to decide between an in-house development – for example based on open-source frameworks – and purchasing a ready-made solution on the market. However, buying pre-trained AI models makes little sense because they usually do not cover the desired use case. All offers should be scrutinized to ensure they meet the required performance criteria. In principle, integrated solutions can save a lot of time and effort that would otherwise be necessary to develop interfaces between different services.
  • Unleash data-driven innovation by deploying an appropriate AI solution: Finally, the AI platform brings valuable insights from the data. It makes sense to divide AI solutions into three types. “Out of the box” AI is well suited to optimizing data-related business processes in a Customer Interaction Center (CIC); however, these are standard solutions that do not offer significant competitive advantages. The second type goes a step further, although it is not yet wholly individual: an AI model assembled from ready-made building blocks. This usually fits the task of generating insights from the company’s data. The third type is the most demanding, namely the individual AI model, which is trained from scratch using the company’s own data sets. This takes a lot of time and effort, but the resulting procedure is unique and can open up a noticeable competitive advantage. The division into these three types of AI makes it possible to distribute scarce human resources sensibly.
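As a rough sketch of the real-time streaming step mentioned above, the following Python snippet uses the google-cloud-bigquery client library to stream rows, including a geographic coordinate, into a table; the project, dataset, table, and column names are placeholders and assume a matching table schema already exists:

    from google.cloud import bigquery

    client = bigquery.Client()  # uses the active Google Cloud credentials
    table_id = "my-project.analytics.events"  # placeholder: project.dataset.table

    rows = [
        {"event_ts": "2021-10-01T12:00:00Z", "origin": "POINT(8.68 50.11)", "amount": 42.0},
    ]

    # Streaming insert: rows become queryable (e.g., for GIS or ML analyses) within seconds.
    errors = client.insert_rows_json(table_id, rows)
    if errors:
        raise RuntimeError(f"Streaming insert failed: {errors}")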

Once all five steps have been completed, the user company receives a powerful solution to gain decision-relevant knowledge from all data streams.

Legacy Systems And Data Quality And Access Are Common Obstacles

A few obstacles usually need to be cleared on the way to the EDW. First, there are legacy systems, which are relatively expensive to modernize and maintain. This limits scalability so that the infrastructure cannot withstand the rapid growth of data. Therefore, the question must be asked: Are the existing systems even able to support AI and ML solutions? Is the effort involved in running and “tuning” them reasonable given the insights they end up generating?

But obstacles need to be overcome not only in the infrastructure but also in the process of data collection. Excessively restrictive data protection and security regulations can significantly limit the necessary consolidation of data streams. In addition, the data sources are often not designed to permanently store or continuously feed in current data. However, AI insights are only as good and extensive as the available database. Therefore, data quality is the fundamental success factor for any AI strategy.

Building A Scalable Data Platform With AI

Our practical example of a data platform that enables AI analysis functions is based on Google Cloud. However, it could also be built on a comparable provider’s cloud stack, such as Amazon Web Services (AWS) or Microsoft Azure.

The platform is orchestrated according to Continuous Integration / Continuous Delivery (CI/CD) principles. This overcomes previous integration problems so that the developers involved can seamlessly integrate their code into the existing one. Automation comes into play in almost all phases of application development. The following diagram shows what this can look like in practice:

Such a CI/CD pipeline creates a continuous stream of data that leads to insights for the relevant decisions. The solution can react to changes in nearly real time and take feedback loops into account. For example, this makes it possible to implement “early warning systems” that enable decisive action in the event of rapid changes.

Finally, it should be mentioned that business analytics is not a purely technical task and that AI/ML models do not lead to results “by themselves.” The contextualization of analysis results and their understanding as a basis for decision-making are still with people – more precisely, with management.

Nevertheless, companies that invest in the appropriate infrastructure today will be able to use the insights from AI analysis for themselves sooner. Over time, their competitive advantage over those competitors who do not want to or cannot unlock the data treasure in their company will continue to increase.

ALSO READ: Companies Should Pay Attention To When It Comes To Radio Technology

What Companies Should Pay Attention To When It Comes To Radio Technology

Security, Costs, Or Range: Radio technology is used when sensors are networked wirelessly for the Industrial Internet of Things (IIoT). But not only the type of radio connection plays a role. We show you what to look out for.

The Internet of Things (IoT) and, in particular, the Industrial Internet of Things (IIoT) do not live from the fieldbuses connecting sensors and actuators alone. Above all, it is sensors that are connected to the Internet of Things.

The traditional way to connect the sensors is via cables, the so-called fieldbuses. This makes sense and is preferable in industrial plants or environments prone to interference. Wireless technology is ideal wherever the sensors are difficult to reach or if the systems need to be connected over long distances.

It’s All A Question Of Radio Technology

Which wireless connection technology is used depends on various criteria. Companies should ask themselves these questions when choosing the right IoT connection. All systems have their respective advantages and disadvantages.

Whether low power wide area networks (LPWAN), classic mobile communications, or in-house radio networks: the range of radio technologies on the market is diverse. It is always important to check the application for which the respective radio technology is to be used. Together with ECS, we have listed eight essential points that can help with the choice of wireless technology.

  • Data: Based on the chosen use case, companies should ask how much data should be transferred, how quickly, and how often. The daily sign of life from a fire detector places completely different demands on a connection than the continuous monitoring of production systems or medical devices with regard to a wide variety of parameters such as temperature, noise, or vibrations. To be able to react to problems as quickly as possible, low latency times play a decisive role. Equally important can be the need for a bidirectional connection to transfer log files, software updates, and individual device data.
  • Range: The available radio protocols vary significantly regarding the maximum distance between transmitter and receiver. The required range depends crucially on the intended use. If there are many sensors within a manageable area, for example in a factory or warehouse, mesh networks are an option. If a city or an industrial park is to be covered, long-range LPWAN networks are more suitable.
  • Place Of Use: The location or position of the IoT devices is closely related to the range aspect. If the devices are located in underground garages or basements, a transmission standard with high building penetration is required, such as Narrowband IoT (NB-IoT). If, on the other hand, the sensors are mobile, such as when monitoring a vehicle fleet, reliable wireless technology with high network coverage is needed. This is where the classic cellular network comes in handy.
  • Energy Consumption: IoT devices and sensors often have to do without a fixed power supply. Many are battery-operated, mobile, or used in places that are difficult to access. For them to work for several years without a battery change, it is all the more critical that the selected radio protocol works as energy-efficiently as possible. This is precisely what LPWAN technologies specially optimised for IoT scenarios, such as NB-IoT, LoRaWAN, Sigfox, or mioty, can do. In addition to their low energy consumption, they also shine with long ranges. In return, compromises were made on bandwidth. However, this is not important in scenarios with small amounts of data, such as readings from a temperature, pressure, or frequency sensor, intelligent electricity meters, or networked garbage cans that only report when they need to be emptied.
  • Availability: The best radio technology is of no use if it is not stable and scalable. It is therefore necessary to clarify what the network coverage looks like in the planned area of application and how market-ready the individual radio technologies are. 5G or NB-IoT networks, for example, are still being built out. LoRaWAN is also not available everywhere and is therefore ruled out in scenarios such as theft protection for construction machinery. Some frequency bands are only available in certain parts of the world, while others can be used license-free across several continents. In the case of international use cases, such as container tracking, it must also be taken into account that specific radio standards are not permitted in some countries, so alternatives must be provided for from the outset.
  • Future Security: The connection technology should be selected with foresight: How likely is it that the provider will still exist in five, ten, or twenty years? What if the network operator shuts down older networks in favor of the next generation of mobile communications, as is currently the case with 3G? Can the preferred wireless technology grow with new devices, additional use cases, and new business models? Open standards, a large ecosystem, and a high degree of dissemination of the technology speak for future security, because that usually leads to the long-term availability of hardware, software, and experts who can still provide support years later.
  • Security: Wireless data transmission involves higher security risks than wired communication of classic fieldbuses. Therefore, companies should pay special attention to security mechanisms. Important aspects are the encryption method used, the options for authentication, and the integrity mechanisms offered to protect against data manipulation.
  • Costs: Finally, it is essential to look closely at the total costs of the different radio technologies. In addition to the acquisition and installation costs for hardware modules, this also includes the ongoing operating expenses, including maintenance. A significant item here can be the network usage fees, which can vary considerably depending on the radio technology used.

ALSO READ: Mobile Edge Computing – Calculate & Evaluate Where Data Is Generated

Mobile Edge Computing – Calculate & Evaluate Where Data Is Generated


Process data where it occurs: in contrast to cloud computing, with mobile edge computing the computing load moves directly to where the data is generated. Sufficient bandwidth and low latency are prerequisites.

The big Internet companies all have their own data centers. And because these large data centers bear a certain resemblance to large farms, they are also called server farms. The most extensive data center in the world is in Langfang, China, and has an area of 585,289 m²; Portugal Telecom operates the largest data center in the EMEA region, with 74,322 m², in Covilhã, Portugal. Many large data centers are in the USA, but there is also a large commercial data center in Germany, in Frankfurt am Main.

Vast amounts of data are analyzed and evaluated in the data centers. The computing load of a data center takes place at a central location. But in times of networked industry and increasing networked traffic – keyword V2X and the communication of vehicles with their environment – data traffic will continue to rise sharply in the coming years.

A data center is not always and not everywhere in the immediate vicinity. If data is to be evaluated quickly on-site on a machine or in a vehicle, appropriate computing power is required at the location where the information is generated. This is where edge computing comes into play, because large amounts of data are generated in networked industry and mobility. The keyword is the Internet of Things (IoT).

What Is Meant By (Mobile) Edge Computing

With edge computing, the functions of cloud computing and the IT service environment migrate to the edge, i.e., to the place where the computing power is needed. In other words: computing power, applications, data, and related services are shifted from central data centers to the point of use.

This makes real-time processes possible. Sufficient bandwidth and low latency times are essential here. The best-known and most important applications of edge computing include, for example, predictive maintenance and quality assurance.

Mobile Networks And Mobile Edge Computing

Thanks to a well-developed nationwide mobile network, edge computing is changing to mobile edge computing. We are now talking about Multi-Access Edge Computing (MEC). The MEC grew out of the ETSI (European Telecommunications Standards Institute) initiative, which initially focused on placing edge nodes in the cellular network but has since expanded to include the fixed network (or eventually the converged network). A MEC network can be built not only from one or more cellular networks but also from WiFi and cable-based networks. It thus covers all available network technologies.

In February 2019, the IT service provider Cisco Systems published a forecast for the monthly data traffic from portable devices in mobile networks worldwide from 2017 to 2022. According to Cisco, M2M data traffic in mobile communications will grow to more than 1,700 petabytes per month by 2022. A lot of data is already flowing via mobile phone networks today: the German mobile phone providers alone carried a data volume of around 3.97 billion gigabytes in their networks in 2020, and the trend is rising.

IoT and edge computing enter into an extraordinary symbiosis. A typical IoT device sends, receives, and analyzes data in a continuous feedback loop. The data is diagnosed either by humans or using machine learning methods. The so-called edge devices are the data-generating devices that collect and evaluate data.

What Mobile Edge Computing Means For Companies

Companies use the Industrial Internet of Things (IIoT) to collect as much machine data as possible. The aim is to control production more precisely and keep expensive downtime to a minimum. With high-performance edge devices, it is possible to generate, analyze, and ultimately store high-quality data directly at the measurement location.

Relevant data can only be meaningfully collected if it is pre-filtered. This is only possible with sufficient computing power at the edge.
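A minimal Python sketch of such pre-filtering on an edge device: only a compact summary, plus an alert flag for outliers, is forwarded upstream, while the raw readings stay local. The threshold and field names are invented for the example:

    from statistics import mean

    VIBRATION_LIMIT = 4.5  # mm/s, illustrative alert threshold

    def prefilter(readings: list[float]) -> dict | None:
        """Reduce a window of raw sensor readings to what is worth sending upstream."""
        if not readings:
            return None
        summary = {"avg": round(mean(readings), 2), "max": max(readings), "count": len(readings)}
        summary["alert"] = summary["max"] > VIBRATION_LIMIT  # flag outliers for immediate attention
        return summary

    print(prefilter([3.1, 3.3, 5.2, 3.0]))  # -> {'avg': 3.65, 'max': 5.2, 'count': 4, 'alert': True}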

Multi-Access Edge Computing And The Autonomous Vehicle

There is no single “killer application” for MEC. A good example is the networked traffic (V2X) mentioned above. In autonomous vehicles, for instance, MEC can provide information about road infrastructure, the position of pedestrians, other cars, or animals, or weather conditions directly to the vehicles. An interface to central cloud servers is no longer necessary.

A combination of MEC with artificial intelligence (AI) and machine learning (ML) methods enables autonomous vehicles to perceive their surroundings in real time. Low latency is a prerequisite, because only if data is analyzed and evaluated on-site (at the edge) can autonomous vehicles be operated safely.

Augmented And Virtual Reality Rely On MEC

A comparatively large amount of data is generated with augmented (AR) and virtual reality (VR). For example, field workers at industrial plants are enabled to maintain and repair machines and systems on-site. The respective field worker is connected via MEC through a headset or mobile device and can have extensive information on the specific procedure they are currently carrying out shown on the display.

Another example is complex 3D models. The computing power of mobile devices is insufficient to render the models, and creating them in the cloud introduces too much latency. With MEC, the data can be processed and the 3D models rendered outside the device, close to where they are needed. Other application scenarios for MEC are the gaming industry (cloud gaming), drone detection, and video analysis.

Multi-Access Edge Computing And 5G

Low Latency And Bandwidth: This characterizes the 5G mobile communications standard. For this reason, MEC was integrated into the overall 5G concept from the start. Only with MEC can the various 5G application profiles mMTC (Massive Machine Type Communication), eMBB (Enhanced Mobile Broadband), and uRLLC (Ultra-Reliable and Low Latency Communication) be fully implemented.

It can also make sense not to carry out performance-intensive computing activities in the end device itself, i.e., in a vehicle, a machine, or an infrastructure component. An edge computing device sits within the network so close to where the data is generated that response times of a few milliseconds can be achieved.

The edge computing device preprocesses the data between the end device and the cloud. This makes it a secure and standardized environment for implementing customer-specific tasks. This ensures that mobile end devices do not become more and more complex despite the ever-increasing range of functions and that energy consumption, weight, and, last but not least, costs can be minimized.

On average, cars stand still for 96 percent of their lifetime – this is where the load distribution advantage of edge computing becomes very clear: you don’t necessarily have to install expensive components in every vehicle but instead shift the logic to an edge computing device that can serve a large number of cars.

Another example of 5G in action is agriculture: 5G enables one sensor per square meter of a field to transmit real-time data on soil moisture. This makes it possible to irrigate fields in a targeted and precise manner.

More and more applications rely on mobile data processing on site. In addition to the corresponding computing power, this also requires broadband networks such as 5G.

Five Advantages Of (Mobile) Edge Computing

These small on-site data centers enable autonomous vehicles, intelligent digital cities, and the Industrial Internet of Things (IIoT). Five main points can be summarized here:

  • Availability And Stability: With mobile edge computing, users are independent of constantly available internet connections. In the event of network disruptions or failures, you can even work offline.
  • Speed And Latency: Low latency is significant in critical environments. After all, even short-term delays lead to disruptions or failures. There are no latency-critical data transfers between the edge and the data center.
  • Security: An important point weighs in here: critical company data and personal data no longer end up in the cloud. At the same time, edge computing leaves the option of using cloud resources for aggregated, non-security-critical data.
  • Mobility: With 5G, mobile edge computing opens up new application options on mobile devices. The speed of 5G and the low latency times increase the resilience of edge computing, and mobile scenarios become possible.
  • Costs: Much of the data can be processed and stored on site. Mobile edge computing thus reduces network usage and bandwidth requirements.

ALSO READ: The Return Of The Multi-Purpose Database

The Return Of The Multi-Purpose Database


In 2009, NoSQL introduced a new category of databases. They were highly specialised and initially seemed to herald the end of the dominance of relational SQL databases, which had previously ruled the market almost unchallenged but lacked the specialisation to cover different needs and data types. But today, general-purpose platforms are making a comeback.

It is no longer just companies from the software industry that depend on the rapid development of innovative applications for the success of their business models. Whether airline, energy supplier, retailer, or fitness provider: A highly functional custom app is standard today and is expected by customers. Frequently used apps such as social media or streaming platforms define the expectation of responsiveness, optimization for mobile devices, preparation of desired content and information, security, personalization, and updates and insights in real-time. The “engine” behind these applications, which have long been part of our everyday lives, is data and their storage and processing in databases.

The forerunner of the Structured Query Language (SQL), on which relational data platforms are based, was already developed in the 1970s. SQL databases have long been the basis for data-driven applications. However, the rise of mobile devices and apps placed new demands on the agility of databases. The data collected for these applications is no longer restricted to a fixed number of attributes; the data sets have become increasingly unstructured and complex. The rigid structure of relational databases, consisting of rows and columns, cannot cope with this complexity and makes storing, analysing, and querying difficult.

The relational model reached its limits in the noughties. Ad hoc changes in the data structure to be stored or in its query patterns remained difficult. Large online retailers, for example, with their massive amounts of data and their need for back-end data analytics, drove demand for a different approach.

Specialised Models: Competing With But Not Replacing SQL

In the wake of the NoSQL movement of 2009, various new database types emerged: document databases, graph databases, columnar databases, in-memory databases, key-value stores, and streaming platforms. All were specialised solutions developed for specific use cases and initially only suitable for these, and each had its particular advantages and disadvantages.

Relational databases, meanwhile, have not disappeared from the scene. Many companies still use them today, but their weaknesses slow down developers when it comes to querying, analysing, and effectively processing the ever-growing flood of data. Of the new models, document databases are the most common alternative today. Their popularity is due to their flexibility, which allows many data structures to be represented. Objects are stored as documents with possibly different attributes. With the help of deep learning, these attributes or “tags” increasingly enable pattern recognition, which speeds up the finding of results in queries.

Document Databases: Most Popular SQL Alternative Among Developers

Instead of the relational model’s rigid row and column structure, document databases map their internal representation more directly to objects in code, eliminating the additional layer of mapping required by relational databases. The benefit of this similarity is that since they are based on JSON-like documents, they are incredibly intuitive for developers to use. JSON is a language-independent and human-readable format that has become a widely used standard for storing and exchanging data, especially web applications.

A significant advantage of document databases is that the structure of the data, the so-called schema, does not have to be predefined in the database and that ad hoc changes in this schema are possible at any time. Such changes can occur for a variety of reasons:

  • Source data is being delivered in a different format.
  • New, additional information is being collected.
  • New queries need to be supported.

Schema management is an essential part of working with relational databases, where the structure of the data must be pre-mapped to support queries about the relationship between different elements. The ability to support these changes without extensive database and application code rework increases the flexibility of developers.
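A short sketch with the official pymongo driver illustrates this flexibility: two documents with different attributes land in the same collection, and an ad hoc query on a newly introduced field works without any prior schema migration. The connection string, collection, and field names are placeholders:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
    products = client["shop"]["products"]

    # Documents in one collection do not need identical attributes.
    products.insert_one({"sku": "A-100", "name": "Lamp", "price": 19.99})
    products.insert_one({"sku": "B-200", "name": "E-Bike", "price": 1499.0,
                         "battery_wh": 500, "tags": ["outdoor", "mobility"]})

    # An ad hoc query on the newly introduced field works immediately.
    for doc in products.find({"battery_wh": {"$gte": 400}}):
        print(doc["sku"], doc["price"])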

Which Approach Does The Future Belong To?

The document database offers excellent potential for companies to use big data sensibly. Its vertical and horizontal scalability ensures that it “grows” with the development of the business model and the increase in data volumes. However, relational databases are still common in web-based applications and online publishing. Specialised models are selected depending on the purpose and requirements. While it seemed that the future belonged to different, specialised databases, the trend is now reversing, as many companies are moving back towards “generalist” models suitable for various use cases throughout the organisation.

There are multiple reasons for this:

  • Users of specialised databases are pushing for feature enhancements so that they do not have to switch between different data stores, because they want to consolidate existing datasets across organisational and technological boundaries.
  • Developers also want more integration. Nobody wants the relational structure back as the only option. However, instead of the effort involved in querying and analysing data within that structure, they are increasingly faced with the effort involved in integrating various specialised databases, each of which underlies a specific application. In extreme cases, the higher integration effort can nullify the agility advantages gained in a particular area.

The Trend Towards Multi-Purpose Data Platform

Database providers have long recognized this trend reversal and are reacting: their models are now much less highly specialised than in the early days of the NoSQL concept. MongoDB’s document database, for example, has provided data lake, charting, and other analytics capabilities since version 4.2 and time series capabilities since version 5.0 (2021). Providers of other models are also expanding their solutions with new functions, thus moving more towards universal applicability.

This consolidation also has tangible economic reasons. The operating costs that arise from the maintenance of several specialised data stores are considerable and often lead to the emergence of operational silos that no company can afford in the process of digitization: data that is stored in a particular format in one system must be converted so that it can be shared with other systems and the teams that use them.

Increased Security Standards Are Easier To Meet

Data security and data protection are very high on companies’ agendas: the requirements for handling sensitive data have become increasingly strict in recent years, while the growing number of mobile devices and access points due to remote work and the shift of enormous amounts of data to the cloud have caused cyberattacks and data theft to skyrocket. The higher the number of platforms, the more complex it is to meet compliance and security requirements: each data store must be secured separately, and its configuration must be checked for compliance with internal and external regulations. A multi-purpose data platform, on the other hand, provides a single point of control for security and compliance, in addition to operational simplification.

While specialised platform vendors add functionality to their databases to broaden their suitability for different use cases, general-purpose platform vendors are adding specialised technical functionality at the same time. This increasingly eliminates the competitive advantage of specialised data stores, and with it the need for their complex maintenance. Modern application data platforms can support text search, time-series data processing, and analytics directly in the core platform, without the need to integrate and set up data exchange with separate systems. While database vendors increasingly moved away from general-purpose workloads following the rise of NoSQL around 2009, the reversal has been evident over the past three years.
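To illustrate such built-in capabilities, the sketch below runs a full-text search directly against the primary data store instead of a separate search system; it uses MongoDB’s text index as one possible example, again with invented names and assuming a local instance with pymongo.

    from pymongo import MongoClient, TEXT

    client = MongoClient("mongodb://localhost:27017")
    products = client["shop"]["products"]

    # A text index makes full-text search a feature of the core platform.
    products.create_index([("description", TEXT)])
    products.insert_one({"sku": "A-100", "description": "open source big data toolkit"})

    # Query the same collection that serves the application workload.
    for doc in products.find({"$text": {"$search": "big data"}}):
        print(doc["sku"])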

Conclusion

When selecting a database, organisations should not merely compare the traditional relational data model with newer approaches. Instead, they should consider a comprehensive data platform whose underlying data model can support as many requirements as possible. The transformation in the database market is being driven by the demands of developers, who in turn need to meet the demands of end-users who want and need to derive business-relevant insights from their data.

The winners of this race will be the data platforms that make it possible to deliver the desired functions. By definition, this goal requires a unified view of the data and a variety of tools that can be used to analyse it, which is exactly what modern application data platforms offer. The future will therefore likely belong to solutions that combine the best of both worlds: they can be used for a wide range of applications and provide specialised functions when required.

Samsung Galaxy S22 Ultra In The Test: Or Is It A Galaxy Note?

The Galaxy Note series celebrates its unofficial comeback with the Samsung Galaxy S22 Ultra. An S-Pen, five cameras and a large, sharp display should make up what is probably the best smartphone of the year from Samsung. We tested the new Samsung flagship in everyday use and compared it with the competition.

Samsung Galaxy S22 Ultra: Design And Finish

The new Samsung Galaxy S22 Ultra design has little to do with the Galaxy S series. The device is quite large at 77.9 × 163.3 × 8.90 millimetres and, at 229 grams, quite heavy. The Galaxy S22+ looks pretty handy in comparison, but it’s not a small device in the smartphone world either.

The S22 Ultra is difficult to use with one hand due to its size, but the space is needed for the large display. The design of the Galaxy S22 Ultra not only looks like a Galaxy Note, but it is also a Galaxy Note. When I first held the device in my hand, I had strong memories of the Note 20 Ultra from 2020.

Angular aluminium with Gorilla Glass Victus+ on the front and back creates a premium feel with no manufacturing flaws. The buttons are straightforward to press. Despite the cameras protruding slightly on the back, the phone sits nicely on the table. Thanks to the IP68 certification, the device is protected against dust and water. Only minor scratches appeared on the back during my test period.

Samsung Galaxy S22 Ultra: Display

The device’s highlight is the 6.8-inch WQHD AMOLED display with an adaptive 120 Hz (1-120 Hz) refresh rate. The colours look great, the display gets very bright (up to 1,750 cd/m²), and the content runs smoothly. The significant disadvantage of the S22 Ultra compared to the other models is the rounded sides.

Swipe gestures and inputs with the S Pen are recognized less well, and the frame is noticeable. While previous models such as the Note 20 Ultra could not yet display WQHD resolution and the advertised 120 Hertz simultaneously, this combination only became possible with the S21 Ultra last year.

This is still possible with the S22 Ultra, which ensures a smooth experience, especially with mobile games such as PUBG or Asphalt. However, the higher resolution doesn’t make a big difference in the UI. It just drains the battery faster. That’s why I usually deactivated the mode and set it to Full HD+.

A second-generation ultrasonic fingerprint sensor is built into the display and reliably unlocks the device in nine out of ten cases. However, the speed has not changed compared to the S21 series, and the area covered by the fingerprint sensor has not been expanded either.

Samsung Galaxy S22 Ultra: Performance

A little background knowledge is necessary to understand this section, so I will briefly explain it here. Although Samsung sells the Galaxy S22 models worldwide, they are not identical everywhere.

In the USA, Korea or India, for example, a different processor is used, one we can also find in the new flagships from Oppo, OnePlus or Realme: the Snapdragon 8 Gen 1 from Qualcomm. In other markets, however, Samsung has been using a completely different processor for years, one from its own company.

In recent years, Samsung’s Exynos processors have repeatedly been criticised for being too slow and too energy-hungry. Samsung should either improve them drastically or drop them, especially since the foldable Z-series models and the flagship tablets have long shipped with the latest Qualcomm chips.

A Worse Processor

Samsung then delivered last year: although the Exynos 2100 wasn’t as fast as Qualcomm’s chip, it was more energy-efficient and didn’t heat up as quickly. In the run-up to the successor, there was a lot of speculation. An AMD GPU integration was announced repeatedly, which was supposed to bring an enormous boost in graphics performance.

But little came of it, because the GPU in the Qualcomm SoC is still ahead of the Samsung processor in benchmarks. In other words: you get a worse processor for your money than in other markets, and that’s a shame.

Slight Stutters Again And Again: The Processor And The Software Of The Galaxy S22 Ultra

There was reason enough to assume, however, that the chip would still let the device run smoothly in everyday life. After all, the Samsung Galaxy S22 Ultra costs 1,249 euros, a high-end price. But in my three weeks of testing with a loaner device from Samsung, that was not the case.

The device, or rather its processor, suffers from software problems that had not been fixed by the time this review was written. There was an 800 MB update at the start of sales, but the software problems persisted even with it installed. In addition, there are repeated minor stutters, which means that animations are not displayed smoothly.

The start screen froze twice for 30 seconds, during which I couldn’t do anything. Touch inputs are sometimes recognized a few milliseconds late, making the operation feel asynchronous. The effect is similar to the familiar Bluetooth problem with cheap headphones, where picture and speech are not in sync.

The Performance Still Has To Mature

That doesn’t ruin the experience, but it is in no way tolerable at this price. Unfortunately, I already know this pattern all too well from the gaming industry. I was looking forward to Cyberpunk 2077, for example, but when I started the game in December 2020, it wasn’t finished at all, and the developer cashed in anyway.

The same applies to Battlefield 2042 or the GTA Remastered trilogy. Incidentally, we are not the only editorial team reporting on these problems. While all US colleagues are delighted with the Snapdragon, colleagues from Germany, Ireland and Great Britain report a lot of stuttering in the software.

However, these reports always refer to the Ultra model and not to the Galaxy S22/S22+. So the performance still has to mature, but our loaner is being recalled this week, so we won’t be able to follow the further development of the software.

Connectivity

Meanwhile, connectivity is not a problem. Wi-Fi 6E, 5G and Bluetooth 5.2 did not show any problems in any of my tests (listening to music, video streaming, downloading software and connecting smartwatches and headphones).

Only the 5G mode has known problems such as reduced battery life and a processor that heats up, which is why I disabled it after a while. The new mobile communications standard currently has far more disadvantages than advantages.

Samsung Galaxy S22 Ultra: Sound

The Galaxy S22 Ultra has stereo speakers that deliver loud, balanced sound. YouTube videos, films and music all play back well over them. The sound only gets a bit tinny at full volume.

However, Samsung is definitely in the upper segment in the smartphone sector. Only the Mi 11 Ultra from last year had an even better speaker.

Cameras

The cameras of the Samsung flagships have always been a highlight in recent years. Samsung continues this with the Galaxy S22 Ultra. A 108-megapixel sensor is used as the primary sensor, making progress in night shots compared to the S21 Ultra.

This is supported by a 12-megapixel camera for ultra-wide-angle photos and two other 12-megapixel sensors for zoom, with one lens equipped for 3x optical zoom and the other for 10x optical zoom.

The cameras take outstanding photos, and Samsung is now the market leader, especially in the zoom area. The primary camera takes high-contrast shots and emphasises colours a little more than before. But it is precisely because of the large lens that photos are incredibly sharp and have many details.

On the other hand, shadows in photos taken with the Pixel 6 Pro or Find X5 Pro are rendered better, so Samsung is not ahead here. The promised night mode also takes good photos with the primary camera, but it doesn’t come close to Google’s Pixel 6 Pro. Google handles reflections and glowing lamps a little more precisely.

In Terms Of Sharpness, Samsung Is Ahead

The photos are by no means bad, just not the best on the market. When it comes to zoom, however, Samsung is far ahead. While Google and Oppo can still keep up at threefold optical zoom, they do not capture as many details as Samsung, and at tenfold zoom at the latest, the Koreans dominate.

The photos are still very detailed and sharp, and slight image noise is only visible on close inspection. In this area, Samsung leads the list of the best; no other manufacturer produces such photos.

Samsung Galaxy S22 Ultra: Software

The positive thing about the software, which has not yet been well optimised, is that it still has a lot of time to be optimised. Samsung is the only manufacturer in the Android sector that promises a full four years of OS upgrades (i.e. up to Android 16) and five years of security updates.

Even Google, which has always been the manufacturer with the most upgrades, has to admit defeat here. In the past, Samsung has also established itself as the fastest manufacturer apart from Google at delivering updates. The entire S and Z series is already on Android 12, and the S22 Ultra ships directly with this software.

OneUI 4.1 has many functions, and there is good interaction, especially in the Galaxy ecosystem. Galaxy Buds headphones are recognized immediately, and recently the S22 Ultra can also be used as a “colour palette” for an app on the Galaxy Tab. But if you are not in this ecosystem, you will find a straightforward interface with many other functions.

Samsung Galaxy S22 Ultra: The S Pen

One of the main reasons for buying this device should be the perfect interaction of hardware and software. Although this does not apply to the processor, it does apply to the S Pen built into the device. With it, you can quickly write notes or start an app via Air Command.

The S Pen also doubles as a remote shutter release for the camera app. The pen recognition, in particular, is said to have been improved compared to the Note 20 Ultra. Since it is a question of milliseconds, that is difficult for me to verify; the detection worked fine for me.

In general, the rule of thumb still applies: If you want a stylus phone, you will find it difficult to avoid the Galaxy Note/now Galaxy S. The only manufacturer worth mentioning is still Motorola with the Edge 30 Ultra, but the pen is not built into the device there.

Samsung Galaxy S22 Ultra: Battery

The Samsung Galaxy S22 Ultra has a 5,000 mAh battery that can be charged with up to 45 watts. However, a charger is no longer included in the box (as with the S21 Ultra); an optional one can be purchased from Samsung for 50 euros.

As the website GSMArena reports, the 45-watt charging is more appearance than reality: the battery of the S22 Ultra charges only about five minutes faster than with a 25-watt charger from Samsung, so you can confidently grab the cheaper charger and don’t have to pay a lot more. The device can also be charged wirelessly at 15 watts and can even charge other devices at 5 watts via its back.

Battery life can be put at about one day when using the high refresh rate and Full HD+ resolution. If you still want to go out on a Friday evening, you should charge beforehand so that you can still communicate at night. A Pixel 6 Pro is in the same battery range, while the Oppo Find X5 Pro has performed better in my testing so far.

Price And Availability

The Samsung Galaxy S22 Ultra comes in four colours: black, white, burgundy and green, plus three other exclusive colours in the Samsung store. The device starts at 1,249 euros with 128 gigabytes of internal storage; if you want more, configurations go up to one terabyte. The device has been commercially available since February 25, 2022.

Cloud-First: 6 Tips For The Optimal Post-Pandemic Tech Stack

Cloud-first is one of the most popular IT strategies. Here is what companies should look out for when evaluating their existing tech stack in order to prepare for the challenges of flexible working models with this approach.

From SMEs to corporations: more and more companies use a cloud-first approach for their IT infrastructure. A regular evaluation of the technology stack is crucial for the long-term success of cloud initiatives. At least once a year – more often if necessary – companies should check the performance of their current applications, whether they support the strategic goals, and where additional functions are needed.

The pandemic has further increased the importance of evaluating the technology inventory. Companies had to set up remote workplaces almost overnight, ensure location-independent access to and editing of content and processes, and enable contactless, digital interactions with customers. To achieve this, many companies adopted various cloud-based technologies without further ado, which allowed business continuity to be maintained. Now is the time to review the solutions implemented for the “contingency” and strategically evaluate the tech stack. The following six best practices help with the evaluation and implementation of cloud technologies.

Cloud-First: Remove Duplicate Functions And Software Silos

Software sprawl means applications and solutions with overlapping functions or ones that are not strategically integrated. A regular assessment of the technology portfolio enables a reduction to the applications that support the achievement of strategic goals and deliver a tangible ROI. Integrating IT into strategic business decisions helps here, because it allows the challenges of individual departments to be understood and suitable solutions to be recommended. In some cases, the desired goal can be achieved with existing technology; in others, a new, cloud-based application is required.

Easy Is Right: Pay Attention To Easy Maintenance And Configuration

The time-consuming maintenance of on-premises systems is a common reason why solutions are outdated and run on versions that are sometimes no longer supported. As a result, companies miss out on functions and bug fixes, and there is also a higher risk of security incidents. By migrating to cloud-based applications, companies benefit from standardised maintenance and flexible options for configuration and integration with other systems. SaaS delivery models like Hyland’s are future-proof: they offer higher performance, security, and availability, as well as maximum flexibility to react quickly to changing business conditions in the “New Normal” and be prepared for the challenges of the “Next Normal.”

Here To Stay: Collaboration Tools From The Cloud

The adoption of cloud technologies such as Zoom, Microsoft Teams, and Office 365 skyrocketed as a large proportion of workers around the world moved to work from home, and companies had to organise remote collaboration. Cloud collaboration tools will remain an essential piece of the IT puzzle in the future, as many companies want to continue to give their teams flexible working options.

Secure User Experience With IAM From The Cloud

If not all employees are sitting in the office and working within the protected company network, additional security precautions are required to ensure that only authorised persons access content and systems – on-premises and in the cloud. Cloud solutions for identity and access management enable secure and straightforward authentication: single sign-on functions give teams secure access to the systems, applications, and content relevant to them with just a single login. This makes working from home easier and increases productivity.

Cloud-First: Prioritising The Systems To Be Migrated

Some organisations operate hundreds of systems, and it’s not uncommon for many of these applications to be tightly coupled, sometimes in ways that aren’t cloud-friendly. Therefore, the IT department needs to prioritise and decide which systems to migrate first. The prioritisation is based on the essential functions required for day-to-day operations, the complexity of the migration, and the integration of the application with other downstream business systems.

Use Know-How For The Successful Implementation Of Cloud-First

When transitioning to a cloud-first strategy, those responsible must consider several things, from selecting the provider to integration and interoperability to project planning. Organisations can tackle the migration themselves or enlist the help of external consultants and cloud experts. In the latter case, it is essential to weigh the additional costs against the advantages of professional support, such as faster project completion, less risk, and fewer errors due to years of experience. Software providers such as Hyland, who offer a wide range of functions and applications and flexible integration options with their platform solutions, can also support the reassessment of the technology inventory and the planning and implementation of a cloud-first strategy.
