Robust IoT Mesh Wireless Network For The Smart City

A smart city cannot function without robust data connectivity: its networks must be flexible and provide reliable bandwidth at all times. The Wi-SUN Alliance, founded in 2011 and built on the IEEE 802.15.4g standard, was created to make cities smarter.

According to the United Nations, 4.2 billion people live in cities – 55 percent of the world’s population. The shift from rural areas to the cities has been rapid: in 1950, only 30 percent of people lived in urban areas. Growth continues at pace because approximately 1.3 million people move from the countryside to the city every week.

Tokyo is currently one of the largest cities, with over 35 million inhabitants. Germany, by comparison, is growing relatively moderately: 5.3 million people live in its largest urban region, the Ruhr area, and 3.4 million in Berlin.

Smart City As An Answer To Growing Metropolises

This continuing influx puts cities under increasing pressure. City authorities must provide infrastructure such as water supply and waste disposal, and they must also manage the growing traffic in large metropolises, including public transport.

Municipalities rely on technology to cope with the many facets of urban life. Smart cities use IoT-based (Internet of Things) systems and devices connected via radio. The enormous amounts of data from various sensors and other sources give city planners and administrations the basis to make better decisions and to use scarce resources more effectively.

It should not be forgotten that the driving force behind the technology is the quality of life of city dwellers. A city’s problems are complex, but the focus of all technical development must remain the people: those who live in an area and those who commute there for work. In larger cities like London or New York, the number of incoming workers can massively exceed the number of residents. Tourists and event visitors also benefit from smart city functions, as their journeys become safer and more pleasant.

Star Architecture With Disadvantages, The Mesh Network Is More Robust

Most smart city applications have traditionally used proprietary networks, which are typically designed for specific applications and offer limited security and upgrade flexibility. They often use a star architecture in which the devices communicate with a single base station receiver. This makes the network vulnerable if the base station fails or is damaged.

On the other hand, Wi-SUN is a self-forming, self-healing mesh network with thousands of nodes that is not dependent on a base station. This improves reliability and fail-safety. The Wi-SUN FAN architecture can continuously reroute data across these thousands of nodes, giving devices full network connectivity and continuous service support even during extreme weather, cyberattacks, or power outages.
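
To make the idea of self-healing routing concrete, the following toy sketch shows how a mesh can recompute a path around a failed node. It is a conceptual illustration in plain Python with a made-up topology, not the actual Wi-SUN FAN routing protocol.

```python
from collections import deque

# Toy mesh: each node lists its radio neighbours (a made-up topology,
# not a real Wi-SUN FAN deployment).
mesh = {
    "border_router": ["n1", "n2"],
    "n1": ["border_router", "n3"],
    "n2": ["border_router", "n3", "n4"],
    "n3": ["n1", "n2", "n5"],
    "n4": ["n2", "n5"],
    "n5": ["n3", "n4"],
}

def route(mesh, source, target, failed=frozenset()):
    """Breadth-first search for a path that avoids failed nodes."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for neighbour in mesh[node]:
            if neighbour not in visited and neighbour not in failed:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no path left

print(route(mesh, "n5", "border_router"))                 # ['n5', 'n3', 'n1', 'border_router']
print(route(mesh, "n5", "border_router", failed={"n3"}))  # reroutes: ['n5', 'n4', 'n2', 'border_router']
```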

Innovative city applications place high demands on the underlying network infrastructure, which requires a high level of security, reliable connections, and resilience to disruptions or changing conditions. Wi-SUN FAN offers a secure, cost-effective, and resilient data connection with minimal additional infrastructure – from rural areas to densely populated urban areas.

Better Interference Suppression In The Wireless Network

Compared to conventional LPWANs (Low Power Wide Area Networks), Wi-SUN offers higher data rates of up to 2.4 Mbit/s and lower latency, combined with low power consumption, increased security, and interoperability. The technology is based on the IPv6 protocol and uses sub-GHz frequencies and the 2.4 GHz band, both of which are license-free. Wi-SUN also supports mode switching to adjust data rates according to the needs of each application.

Wi-SUN’s other technical advantages include better interference suppression through frequency hopping and the ability to bridge large distances. With multiple vendors involved in developing Wi-SUN products, smart cities can ensure they are not locked into a particular vendor and keep their options open for future developments. The open Wi-SUN standard encourages competition, keeps prices competitive, and ensures cities can rely on long-term continuity of service.

For getting started with Wi-SUN, the manufacturer Silicon Labs offers a Wi-SUN wireless starter kit (SLWSTK6007A) for frequencies from 868 to 915 MHz with the appropriate wireless boards (BRD4170A). It includes all the tools needed to develop Wi-SUN radio applications, including sensors and peripherals and a J-Link debugger.

Keeping The Smart City Safe From Cyberattacks

The security benefits of Wi-SUN are another differentiator. Hospitals, government agencies, public transport, businesses, and power grids are becoming more vulnerable to cyberattacks, so their networks need to be secure. City networks are also increasingly interconnected, which makes it all the more important that they all meet security standards.

Wi-SUN has important built-in security features. One of the most important is that security authentication extends to the cloud provider, which is not the case with most other protocols. Wi-SUN also takes cybersecurity rules into account when a network is designed to ensure reliable operations, including end-to-end security, encryption, key management, and network isolation in the event of a security breach.

A key feature of Wi-SUN is native Public Key Infrastructure (PKI) integration, certificate-based mutual authentication, and proven data encryption and key exchange algorithms. Wi-SUN FAN access control is based on PKI modeled after the Wi-Fi security framework. Each Wi-SUN device also has a unique certificate signed by a certification authority at the place of manufacture.
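
A minimal sketch of this idea using the Python cryptography library: a manufacturer CA signs a per-device certificate that the device can later present for mutual authentication. This is a generic X.509 example with made-up names, not the actual Wi-SUN certificate profile.

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

now = datetime.datetime.utcnow()

# Manufacturer CA: key pair plus a self-signed CA certificate.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Device CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Device certificate, signed by the CA "at the place of manufacture".
device_key = ec.generate_private_key(ec.SECP256R1())
device_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "wisun-node-0001")]))
    .issuer_name(ca_name)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .sign(ca_key, hashes.SHA256())
)

print(device_cert.public_bytes(serialization.Encoding.PEM).decode())
```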

London Street Lighting With Wi-SUN

Today’s smart city includes intelligent meters (smart meters), intelligent street lighting, public safety, traffic monitoring, noise detection, and pollution monitoring. Wi-SUN FAN offers flexibility in data rates and power consumption so that all applications can be efficiently supported. The technology is also suitable for integrating intelligent city sensors and distributed energy resources (DERs) into the power grid.

A recent example of a Wi-SUN application is a street lighting network in the City of London. It connects 12,000 lighting units that replaced existing stock that had reached the end of its useful life. The new lighting reduces energy consumption and helps reduce maintenance costs. Over the next few years, the City of London plans to add more devices and sensors to the Wi-SUN network, including environmental monitoring. The narrow streets and tall buildings of central London present a wireless network challenge that Wi-SUN has successfully met. The Wi-SUN FAN network uses multiple gateways for a redundant, reliable data connection and increases reliability thanks to its self-forming and self-healing capability.

IoT Analytics estimates that the market for connected street lighting will top $3.6 billion by 2023, growing at a compound annual growth rate (CAGR) of 21 percent. The world’s largest installation of connected streetlights is in Miami with almost 500,000 units, followed by Paris with 280,000. Both major projects are based on a Wi-SUN network to ensure the required data connection.

Human-Centered Philosophy

In the development of Wi-SUN, the focus was and is on people. The performance characteristics of this technology enable urban planners and developers to build out urban network infrastructure and make it future-proof. Wi-SUN enables utilities, municipalities, and enterprises to deploy long-range, low-power wireless mesh networks that can connect many thousands of IoT nodes.

As smart cities develop in the coming years, Wi-SUN will provide the reliable data connectivity needed to balance sustainability and quality of life and enable new applications that have not even been thought of before.

Data Layer Management – Six Reasons For Outsourcing

Reliable operating environments are essential for processing large amounts of data at the data layer. The managed service provider and open-source specialist Instaclustr names six reasons that speak in favour of outsourcing.

Companies are often faced with the question of whether to carry out data layer management in-house or outsource it. Heavily regulated industries such as financial services or healthcare often rely on on-premises solutions because they have to, or want to, host and manage all data internally. Managed platform models are a good solution because they can be used on-premises, in hybrid or multi-cloud environments, and in the public cloud. High security is guaranteed because control of the data layer always remains with the company rather than the service provider.

Outsourcing Has Advantages

According to Instaclustr, however, the complete outsourcing of data layer management is often the best option for many companies. It offers several advantages, from simple administration, deployment, and configuration of clusters to expert support. Specifically, Instaclustr names the following benefits of data layer outsourcing:

High Reliability

Managed platform providers can almost always offer a higher level of reliability than is possible in-house. By combining management systems with integrated technology, mature processes, and deep human expertise, managed services can meet strict service level agreements (SLAs) for availability and latency.

Ease Of Use

A managed platform provides customers with a toolset for simplified management of the data layer clusters. Customers can easily set up and decommission clusters or change node types and sizes without having to change any code.

Low Costs

The management of data layer technologies requires human resources and expert know-how – along with the associated personnel costs. Outsourcing can bring companies cost advantages here. Above all, an external provider automatically rolls new patches and functional extensions into the environment, which significantly reduces the running costs of administering the data layer solutions.

Better Use Of Internal Resources

If a company doesn’t need to manage the data tier, it can use those resources for other projects. Outsourcing gives individual employees more time to concentrate on higher-value tasks and innovations in their core business areas.

Continuous Access To Expert Knowledge

With internal management, there is always the possibility that employees and experts will leave the company. This risk of a sudden lack of personnel in an important area does not exist when data layer management is outsourced.

High Scalability

Outsourcing data layer management offers several scalability benefits. The first concerns provisioning: when a company initiates a new project, a simple cluster can be up and running in minutes. Outsourcing also facilitates dynamic scaling of the infrastructure depending on the amount of data. With proven processes and highly automated or even self-service tools, a managed service can be scaled up or down much more easily as demand increases or decreases.

Consider Long-Term Needs

“Factors such as budget constraints, administrative effort, flexible customization options or the need for expertise play an important role in every IT decision.” This also applies to the selection and management of data layer technologies. Companies must consider the requirements of the current architecture as well as their long-term architecture needs in light of industry trends.

“In order to achieve maximum flexibility at minimum cost, companies are increasingly relying on outsourcing, specifically on managed platforms that offer high flexibility, scalability and security. If companies do not want to go down the path of complete outsourcing, they can also use managed platforms on-premises and thus benefit from the same advantages.”

IT Disruptions: Companies Want To Improve Digital Employee Experience

As a new study by Nexthink shows, industrial companies want to take greater account of the digital employee experience in the future. In addition, many forgo new applications to avoid possible IT disruptions.

The relevance of the digital employee experience has arrived in the mechanical and plant engineering and industrial production sectors, according to the new Nexthink study “From technical performance to IT experience: Digital workplaces in strategic focus”. For 81 percent of the surveyed companies from the DACH region with industrial production and 70 percent from mechanical and plant engineering, IT disruptions and the resulting digital employee experience play an essential role. 76 percent of companies with industrial production and 66 percent of mechanical and plant engineering companies want to advance the topic with employees who are specifically responsible for it.

Resolve IT Disruptions Faster With Automation

Manufacturing companies, in particular, are planning comprehensive measures for trouble-free digital work environments. Eighty-seven percent of them (average 81 percent, mechanical and plant engineering 75 percent) will use systematic processes and tools with a high degree of automation to speed up troubleshooting in digital workplaces.

According to the study, 85 percent of manufacturing companies use metrics and key performance indicators to measure the quality of digital workplaces (average 76 percent, mechanical and plant engineering 72 percent). A similar picture emerges with the plan to use a central management platform for the IT helpdesk that covers everything from ticketing and user communication to reporting, analysis, and troubleshooting guidance.

Prevent IT Disruptions With Predictive Analytics

What are the causes of IT disruptions? For answers, 85 percent of the manufacturing industry and 68 percent of mechanical and plant engineering companies (an average of 75 percent) intend to use an integrated system that correlates data from the IT backend with the management platform of the service desk. Predictive analytics technologies such as big data, artificial intelligence, and machine learning are also popular in industrial production, at 81 percent, for foreseeing or preventing possible disruptions (average 76 percent, mechanical and plant engineering 75 percent).

The Majority Lives With Compromises In The Digital Workplace

The measures are a logical consequence of the fact that, until now, companies have waited passively for IT disruptions instead of proactively avoiding them from the outset with a high level of automation. Most of the companies surveyed live with many compromises in the digital workplace that must be viewed critically (industrial production 70 percent, mechanical and plant engineering 62 percent). In addition, many faults occur again and again (industrial production 46 percent, mechanical and plant engineering 55 percent), and it usually takes a long time to rectify them, according to 57 percent of companies with industrial production and 58 percent of companies in mechanical and plant engineering.

In The Event Of IT Disruptions, The User’s Situation Is Made Unnecessarily Difficult

The situation is also unnecessarily complicated for users. They often worry about how long a disruption will last (industrial production 70 percent, mechanical and plant engineering 66 percent). The IT self-service portal often proves not very helpful (industrial production 48 percent, mechanical and plant engineering 62 percent). In addition, users are insufficiently informed about significant or planned impairments in the digital workplace, say 85 percent of companies with industrial production and 77 percent of mechanical and plant engineering companies. From the user’s point of view, it is frustrating that although employee feedback is obtained, it usually does not lead to improvements (industrial production 54 percent, mechanical and plant engineering 70 percent).

Frequent Abandonment Of New Applications Due To Possible Problems

The existing deficiencies in the IT experience of employees are particularly evident when new applications are introduced. Ninety-one percent of those surveyed from manufacturing companies believe that more or better assistance should be provided for new applications (mechanical and plant engineering 81 percent). To make matters worse, end-users avoid calling the ticket hotline when they have problems with new applications (industrial production 57 percent, mechanical and plant engineering 72 percent). Many companies introduce few new applications in order to avoid possible problems, as confirmed by 59 percent of those surveyed from industrial production and 75 percent from mechanical and plant engineering.

The Perspective Of The User Is Gaining In Importance

The results of the Nexthink study show that with the dynamic development of digitization, the topic of digital employee experience is also coming to the fore in companies. Eighty-one percent from industrial production and 83 percent from mechanical and plant engineering say that this topic will have high priority in their company. And above all, the manufacturing companies (80 percent) want to take the user’s perspective into account more – an aspect that is much less important in mechanical and plant engineering with 55 percent.

E-Mental Health: Digital Innovations For Mental Health

The mental health of employees is more critical than ever for companies today. Digital innovations can be the solution, especially in light of the corona pandemic and the shortage of specialists.

E-Mental Health In Practice: The past few months have shown impressively how negatively the social isolation caused by the pandemic impacts people’s psyches. According to a study, perceived loneliness has increased significantly due to the lockdown. Anyone who is already suffering from a mental illness is considered particularly at risk during these times. If social contacts such as those with work colleagues or customers are lost, this represents an additional burden on mental health. And the medical care of patients with mental illnesses has also deteriorated immensely. For example, numerous appointments had to be postponed or canceled due to the high risk of infection.

Prevent Burnout: This Is What Companies Can Do

It is all the more important that employers keep a close eye on the mental state of their employees and prevent mental illnesses. After all, employee satisfaction and motivation are essential for desirable work performance. Regular surveys offer the ideal opportunity to obtain direct feedback and assess the company’s general mood.

A survey conducted in 2018 identifies emotional stress and time pressure in the professional context as the greatest stress factors, followed by too much overtime and a bad working atmosphere. According to the survey, every second person believes that burnout could affect them. Six out of ten people even complain about symptoms typical of burnout syndrome. To prevent employees from burning out and to protect their mental health, companies should, among other things, pay attention to a healthy work-life balance. Flexible working time models, options for remote work, free sports offers for physical compensation, or other benefits have proven to be modern and practical measures for this. But these are not always enough.

E-Mental Health On The Rise?

Once mental disorders are present, they often lead to significant impairments on an individual and societal level. Professional performance is also severely restricted. Going to a therapist is advisable at this point. But this step represents a considerable hurdle for many of those affected because there is an acute lack of therapy options in cities. The gap between the number of inhabitants and the number of available psychotherapists is enormous, as analyses of common search platforms for medical professionals show.

E-mental health offers an excellent opportunity to make up for this deficiency. These are online-based treatment providers that can be used via computer, tablet, or smartphone and can thus counteract the current supply bottlenecks. What exactly is behind the term is revealed by a current definition by Nobis et al.

“E-mental health involves using digital technology and new media to provide screening, health promotion, prevention, treatment or relapse prevention. Measures to improve care (e.g., electronic patient files), professional training (e.g., using e-learning), and online research in the field of mental health are also part of e-mental health.”

E-mental health offers the possibility of carrying out treatments digitally that could previously only be offered in person on site. In addition to counteracting the shortage of therapists in cities, gaps in care in rural areas could also be closed.

Using Modern Media For Medicine: E-Mental Health

The need for digital support for mental problems is therefore enormous. The number of Google searches related to mental health is increasing every year. As demand grows, so does the range of e-mental health offerings. Modern media provide additional possibilities: in addition to online therapy and counselling and freely accessible information material on the Internet, there are now around 20,000 apps that deal with the topic of mental health. These can be essential support before, during, and after psychotherapy. For example, if companies provide free access to self-care apps, they can preventively counteract mental health problems among their employees. Mental health is becoming increasingly important in our society.

Digital Psychiatrists And Coaches For The Mental Health Of Employees

Especially in everyday work, people are regularly faced with new challenges that often have to be mastered under significant time pressure. A lack of community spirit in the team or too little praise and recognition from management also hurts mental health. But internal causes such as too little self-confidence, non-acceptance of one’s weaknesses, or excessive demands on oneself can also form the basis for developing a burnout syndrome or other mental disorders.

Therefore, enabling individual digital coaching or therapeutic advice is another way of using e-mental health in a professional context. With its help, workshops on conflict resolution, time management, and stress management in the workplace can be offered. Employees learn, for example, how to use various techniques to achieve their own and the company’s goals more efficiently and not to panic even when faced with larger or unfamiliar challenges. Building and strengthening stress resilience is probably the most critical step in preventing an impending burnout syndrome. In this way, the general atmosphere in the workplace can be improved, and the physical and mental condition, as well as the motivation and productivity of the employees, can be increased.

Sustainable IT: IT Leaders Take A Close Look At Common IT Practices

A new report by Nexthink has examined trends in IT sustainability, including upgrading hardware rather than replacing it, improving start-up times, and training employees to use IT in a greener way.

Nexthink has published the new report “Avoid E-Waste: Sustainable Workplace IT in Numbers”. According to the report, bad IT habits can be discarded, and the status of end devices can be checked more closely. This can reduce environmental impact and contribute to a more sustainable future. Many customers of Nexthink, a provider of solutions for digital employee experience management, have also set themselves the goal of introducing sustainable IT and reducing their digital CO2 footprint. Based on the data from the collaboration with Nexthink, they took the opportunity to reduce their CO2 emissions and also want to reduce waste throughout the organization.

The report focuses on data collected from 3.5 million anonymized devices during the first few weeks of implementation. It examined how IT leaders in different countries can reduce costs and their environmental footprint while improving the IT experience of employees. Three important practices for sustainable IT were examined.

Sustainable IT: Don’t Automatically Replace Every Device

The report draws attention to the widespread habit in companies of replacing hardware every few years, regardless of its condition. Nexthink’s research found that 20 per cent of these devices are still performing well and don’t need to be replaced. And of the 80 per cent that exhibited low performance, only 2 per cent were beyond salvage – the remaining 98 per cent could be kept in operation with a simple RAM upgrade or boot-speed optimization. Companies that perform these small repairs save millions and contribute less to the global e-waste problem.

Testing And Improving Device Boot Times

Thirty-four per cent of the 3.5 million end devices anonymously checked for the Nexthink report took longer than five minutes to boot up. They cause more than 450 tons of CO2 emissions per year – the equivalent of around 190,000 litres of gasoline. This waste can be avoided by better assessing the health of employees’ devices, better understanding user habits, and taking a proactive approach to common IT issues.
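
As a rough plausibility check and a starting point for such an assessment, the hypothetical sketch below converts the stated gasoline equivalent into CO2 using a common emission factor of roughly 2.3 kg CO2 per litre (an assumption, not a figure from the report) and flags devices whose boot time exceeds five minutes in a made-up telemetry list.

```python
# Hypothetical sketch: sanity-check the CO2 figure and flag slow-booting devices.
# The emission factor (~2.3 kg CO2 per litre of gasoline) is a common
# approximation and not taken from the Nexthink report.

GASOLINE_LITRES = 190_000
KG_CO2_PER_LITRE = 2.3

tons_co2 = GASOLINE_LITRES * KG_CO2_PER_LITRE / 1000
print(f"~{tons_co2:.0f} tons of CO2")  # ~437 tons, in the same range as the reported ~450

# Made-up boot-time telemetry: (device name, boot time in seconds).
boot_times = [
    ("laptop-0017", 412),
    ("laptop-0042", 95),
    ("desktop-0008", 367),
]

SLOW_BOOT_THRESHOLD_S = 5 * 60  # five minutes, as in the report
slow_devices = [name for name, seconds in boot_times if seconds > SLOW_BOOT_THRESHOLD_S]
print("Devices to investigate:", slow_devices)
```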

Employee Training On Environmentally Friendly Computing Habits

A lack of understanding of employees’ digital habits leads to higher emissions and reduced computer speeds. The research found that games, personal communications, and streaming apps combined caused 33 tons of CO2 emissions per year on the 3.5 million devices examined. To put that in perspective, it would take 300 trees a full year to absorb those emissions. Based on this report’s sample, IT leaders have the opportunity to cut at least 695 kilograms of their company’s carbon emissions per week. To do this, they must educate their employees about better computing habits and eliminate high-emission applications.

Sustainable IT: Understanding The Way Employees Work

“Creating a more sustainable work environment is a top priority for companies today. But while many CSR initiatives focus on reducing single-use plastic and eliminating paper waste, they overlook the high emissions that their IT hardware and digital activities produce every day. IT leaders have a responsibility to understand better the environmental impact of their employees’ digital footprint. To proactively address digital issues that contribute to environmental pollution,” said Yassine Zaied, Chief Strategy Officer at Nexthink.

“Simple actions such as ensuring software is up to date, turning off laptops when not in use, and removing unnecessary applications can reduce emissions and save businesses money. Most companies would love to make these changes, but many find it difficult to do so efficiently without a concrete approach. CSR improvements are possible when companies can understand and respond appropriately to how their people work and the challenges they face.”

For more on how IT impacts the environment and what IT leaders can do to make IT sustainable, please read the report Avoiding E-Waste: Sustainable Workplace IT in Numbers.

Nexthink is a software provider for the management of digital employee experiences. Nexthink offers IT leaders insight into the IT experience of employees at the device level so they can actively shape future-oriented work environments. This enables IT teams to move from reactive problem-solving to proactive IT service.

Expense Management: Why It Is So Vital For The Digitization Of SMEs

What does the digitization of financial documents bring? And why shouldn’t you continue to file your invoices in paper form? For individual sectors such as green energy, progress may be understandable and visible, but the debates about digitization deficits seem to contradict this.

In addition to the slowly progressing broadband expansion, the lack of mobile Internet access, and other brakes on digitization, it is not possible to pin all the deficits on politics alone – you also have to look at the companies. There is a need to catch up on digitization in expense management, among other things.

The Little Ones Stay Small, And The Big Ones Remain Big

Managers of small and medium-sized companies are often faced with the question of whether they want to simplify processes through the use of technology. The excuse most often given is: “Never change a running system.” Why change anything when it has done such an excellent job so far? With precisely this mental evasion, SMEs ensure that only the large companies, which are the ones able to assign dedicated staff to restructuring, integrate new technologies into their everyday work.

SMEs, on the other hand, tend to fall by the wayside when it comes to technological progress. Of course, every beginning is difficult, and restructuring an entire department outright may cause decision-makers concern. So why not start with an area that often causes the greatest effort in companies and at the same time uses the most paper? We’re talking about expense management.

Expense Management Facilitates The Handling Of Expenses And Expenditures

Expense management usually involves a lot of bureaucracy. In addition to invoices, small receipts, and new legal requirements, handling expenses means a lot of paperwork. Finally, all receipts must be categorized, filed, and retained for several years. In medium-sized companies, this can quickly mean that entire rooms are used solely for archiving. Here, in particular, there is great potential to make work easier and protect the environment.

Digitizing all receipts for the company’s expenses and expenditures ensures that they remain securely stored in digital form. In contrast to conventional archiving in folders, this requires at most a USB stick – and users of digital expense management with a cloud connection no longer even need that to store their receipts permanently and in compliance with guidelines.

The fear of data leaks, spies, or hackers that many decision-makers have is also unfounded in expense management. Today’s solutions are secured multiple times and strongly encrypted. In most cases, technology providers within the EU also comply with GDPR standards. But back to the crucial question: what does SMEs’ willingness to digitize bring at the national level?

Expense Management: Immediate Impact On Digitization

Participation and, above all, the willingness to adopt new technologies drive the development of technology providers. This means, first, that technology providers from other EU countries recognize a new market and want to open it up, and second, that new start-ups and providers of technology solutions are founded. As a result, the pressure to digitize increases and strengthens the national economy in the long term. If more small and medium-sized companies depend on fast Internet and reliable connections, this will also accelerate broadband expansion. In this way, SMEs not only hold their own digitization in their hands but also ensure progress as a whole.

It is not uncommon for the media to claim that the country is missing out on digitization. Through my job in many countries, I have gained insight into how quickly – or rather slowly – digitization is progressing in the finance departments of companies of all sizes, among other things. Even if it would seem reasonable to assume that companies in other countries are far ahead in this regard, my insight is that “never change a running system” still serves as the argument in many places.

Big Data, NoSQL Databases Make Their Grand Entrance

Companies are faced with storing, processing, and analyzing vast amounts of data. In contrast to relational databases, which are struggling with the new data world, NoSQL databases show their advantages in the big data age.

The rise of the IoT and other connected data sources has led to a tremendous increase in the data companies collect, manage and analyze. Big data promises big insights for companies of all sizes and in all industries – it’s not about how much data a company has but what it makes of it. Used correctly, it can help reduce costs, develop new products and optimized offerings, and enable smarter business decisions.

However, data volumes that quickly reach the terabyte range are no longer necessarily clearly structured; they also include unstructured data such as e-mails, documents, photos, or videos. Classic relational database management systems (RDBMS) with fixed structures can cope with this wild mix, which cannot easily be brought into table form, only via detours and workarounds. These detours increase costs while performance decreases.

NoSQL Solutions Are More Flexible

NoSQL databases emerged as a reaction to the weaknesses of relational database management systems: at some point, the classic systems were too slow, not sufficiently scalable, and not agile enough because they could not work in a distributed manner. Apart from vertical scaling (scale-up), most RDBMS only handle rudimentary forms of horizontal scaling (scale-out). In this case, failover clustering is based on shared storage, while always-on availability groups are limited to replication. If the data volume grows, administrators have to install a larger system and, with increasing user numbers, a more powerful server. Otherwise, such a system not only becomes a bottleneck but also a single point of failure.

In contrast, NoSQL solutions are more flexible in data aggregation because they work with objects rather than fixed tables, and these objects can be processed in an object-oriented format. A JSON document – JSON is an acronym for JavaScript Object Notation – can contain different, changing data types, and its length may vary depending on the available data. NoSQL databases also typically scale out, spreading data across additional inexpensive servers; likewise, more users are distributed across more servers to keep latency low. Benchmarks show that Couchbase maintains a consistent increase in performance across any number of nodes.

To ensure high availability, Couchbase also supports unidirectional and bidirectional replication between geographically separated data centers with a dedicated Cross Data Center Replication (XDCR) function as standard. Many RDBMS, on the other hand, require additional software for replication, which means higher license costs. Modern NoSQL data managers also keep a large part of the aggregated data in fast random access memory (RAM), speeding up analysis.

Ecommerce And NoSQL Databases

E-commerce is a good example of the use of NoSQL databases. Online shops keep attracting customers with special offers such as Black Friday or Cyber Monday. During this time, the number of users and thus the amount of data explode, so high scalability is a basic requirement to handle this workload. This is where NoSQL databases show their advantages. They are finely scalable via nodes that are organized in clusters, which allows almost limitless flexibility since the cluster can easily be grown and shrunk again after the peak.

The Classic JOIN Eats Up A Lot Of Time

The SQL operator JOIN can become a real brake on performance in complex analyses. It is necessary in classic RDBMS because, for more extensive evaluations, several relational tables usually have to be connected via key indices – only in rare cases is all the data to be evaluated located in a single table. Document databases, on the other hand, can store the data from multiple relational tables in a single JSON object.

JSON documents allow nesting and can therefore also map complex structures very well. The advantage: only a single read operation is necessary, and the analysis algorithm has access to all the data it needs. There is also no incompatibility (impedance mismatch) between application objects and JSON documents. The NoSQL database Couchbase Server stores JSON documents in “buckets,” which organizations can deploy across multiple server nodes.
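
As an illustration, the sketch below uses a hypothetical shop schema to show how an order that a relational model would spread across customers, orders, and order-items tables can live in a single nested JSON document, so that one read returns everything an analysis needs.

```python
import json

# One nested JSON document replaces a JOIN across three relational tables
# (customers, orders, order_items in a hypothetical shop schema).
order_document = {
    "type": "order",
    "orderId": "o-1001",
    "customer": {"customerId": "c-42", "name": "Jane Doe", "city": "Berlin"},
    "items": [
        {"sku": "A-100", "quantity": 2, "price": 19.90},
        {"sku": "B-205", "quantity": 1, "price": 49.00},
    ],
}

# A single read gives the analysis code everything it needs – no JOIN required.
total = sum(item["quantity"] * item["price"] for item in order_document["items"])
print(json.dumps(order_document, indent=2))
print(f"Order total: {total:.2f}")
```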

Also, developers and business analysts don’t have to learn a new technology. With N1QL, users can continue to use the familiar syntax and semantics of the query language SQL to run queries over JSON documents, create new databases and documents, or maintain existing documents. There are also features such as full-text search and ad-hoc analytics. Modern databases should also be able to run either on-premises or in the cloud. Organizations today are combining the cloud services that best meet their unique needs and don’t want to be held back by proprietary hurdles.
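
A minimal sketch of what such a query can look like, assuming the Couchbase Python SDK (3.x or later), a locally running cluster, and the well-known travel-sample example bucket; the credentials are placeholders.

```python
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

# Placeholder connection details for a local Couchbase cluster.
cluster = Cluster(
    "couchbase://localhost",
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)

# N1QL keeps SQL syntax and semantics but operates on JSON documents.
query = """
    SELECT t.name, t.city
    FROM `travel-sample` AS t
    WHERE t.type = "hotel" AND t.country = "United Kingdom"
    LIMIT 5
"""
for row in cluster.query(query):
    print(row["name"], "-", row["city"])
```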

Even with big data becoming more important, not all companies will migrate all legacy RDBMS to a modern NoSQL database in one step. This is unnecessary because NoSQL and RDBMS form a strong team: A NoSQL database, for example, acts as a high-performance cache server and takes on the new modern data types, while an RDBMS handles the classic, transactional database business. Customers thus benefit from the best of both worlds.

Graph Databases In Comparison

Graph Databases – The graph database market is thriving and growing as demand for connected data analysis increases rapidly. But IT users wonder which graph database is the most powerful and, with its functions, best suited to their needs.

This brief overview examines common graph databases, which can now be run in the cloud: Neo4J, Amazon Neptune and Apache TinkerPop. A newcomer to the selection is TigerGraph.

Apache TinkerPop

Apache TinkerPop is an open-source project. The first version of the graph traversal engine was released in 2011, and in 2015 it found its way into the Apache incubator. Because it integrates easily, offers flexible storage options, and carries a permissive-use licence, TinkerPop has become the preferred choice of NoSQL vendors looking to add a graph interface to their products.

According to the independent publication DB-Engines.com, Neo4J is already responsible for about half of the entire graph market. Products based on TinkerPop account for about 40 per cent of the total market. The remaining 10 per cent is distributed among more than 20 different providers, including well-known ones like Apache Spark, young ones like Amazon Neptune and proprietary ones like SAP HANA.

Neo4J

Neo4j offers a native graph platform. “Native means that we developed the graph database from scratch for the graph data model,” the company explained at its customer event in Berlin, describing the difference from non-native graph databases. “There, the graph data model is implemented via an adapter layer placed on top of the underlying data model, whether relational, JSON-based or key-value-based.”

In contrast, the Neo4j database comes from a single source: the manufacturer owns the entire technology stack and can optimize all components for a specific purpose, including particular workloads. It also maintains the indices, which it or the customer can tune for specific purposes. The Neo4j platform is available in the public cloud as “Aura”.

Cypher has become the default query language for graphs and is also used, for example, on Apache Spark. “Cypher is intended to replace Spark’s own graph tool over time, with the Cypher implementation for Spark being called CAPS (Cypher for Apache Spark).”
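
A minimal sketch of a Cypher query issued through the official Neo4j Python driver; the connection URI, credentials, and the tiny social graph are placeholders for illustration, not part of any of the products compared here.

```python
from neo4j import GraphDatabase

# Placeholder connection details for a local Neo4j instance or an Aura database.
driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Create two people and a relationship, then ask who Alice knows.
    session.run(
        "MERGE (a:Person {name: $a}) "
        "MERGE (b:Person {name: $b}) "
        "MERGE (a)-[:KNOWS]->(b)",
        a="Alice", b="Bob",
    )
    result = session.run(
        "MATCH (:Person {name: $name})-[:KNOWS]->(friend) RETURN friend.name AS friend",
        name="Alice",
    )
    for record in result:
        print(record["friend"])

driver.close()
```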

Amazon Neptune

Amazon Neptune is a fast, resilient, fully managed graph database service for building applications that work with highly connected datasets. The core is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. “The database is optimized for OLTP requests, for many parallel requests with a short latency.”

Amazon Neptune supports the popular Property Graph and W3C RDF data models and their associated query languages, Apache TinkerPop Gremlin and SPARQL, allowing users to employ these open APIs to build queries that efficiently navigate highly connected datasets.
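
To illustrate the Gremlin side, here is a hedged sketch using the gremlinpython package; the Neptune endpoint is a placeholder, and an RDF workload would instead send SPARQL to the cluster’s HTTPS endpoint.

```python
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal

# Placeholder endpoint; a real cluster uses wss://<cluster-endpoint>:8182/gremlin.
connection = DriverRemoteConnection(
    "wss://my-neptune-cluster.cluster-example.eu-central-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)
g = traversal().withRemote(connection)

# Traverse the property graph: who do "person" vertices know? Limit to five names.
names = g.V().hasLabel("person").out("knows").values("name").limit(5).toList()
print(names)

connection.close()
```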

Amazon Neptune is optimized to work with graph data in memory but does not limit the database to the size of main memory: an Amazon Neptune database can hold up to 64 terabytes regardless of how much RAM is available. Users should ensure that frequently read data is part of the working set; Neptune optimizes read accesses and read queries for this, and users can scale up or down as needed. “We differentiate between write and read access,” and the working sets can be designed differently depending on the request load.

“Customers want reliability and data security, performance and reasonable costs per workload. They also want high availability. Enterprise customers want support and compliance with SLAs.”

Scalability And Performance

At the same time, customers demand the scalability of the graph database, for example, if a larger database model is to be implemented, such as at Facebook. “There are then also many connections between the entities and concepts.”

There are different ways to ensure sufficient scalability. “You could create a fairly long table in the relational model, for example, for customers and their contacts.” But with a network of relationships, users quickly reach the limits of a relational model. Therefore, the use of a graph database makes sense.

“The high availability of Neptune is primarily guaranteed with read replicas.” These can be placed in different Availability Zones (AZs) and automatically synchronize their data from the Amazon Neptune cluster volume. “Read access for queries is done via these replicas, which is optimal for OLTP, so the latency is in the range of tens of milliseconds.”

There is one controller instance and up to 15 read replicas. Write access is limited to the controller instance of the entire database, but read access can be scaled significantly: “The user can distribute this access over a maximum of 15 replicas of the entire database.” The controller instance is also responsible for serializing the transactions for which Neptune was designed. Much of the working set is held in main memory to increase throughput and reduce latency.

Data is distributed to the replicas via Amazon Neptune’s Virtual Storage Layer (VSL), the cluster volume. This logical layer is based on a storage cluster that Neptune manages. The VSL keeps a transaction log distributed across the Availability Zones (AZs), which increases data security in addition to IAM and key management. The high availability and durability of the data are thus guaranteed for demanding customers.

Distributing replicas across multiple AZs maximizes availability for failover. The size of the replica instances depends on the workload, but it can be defined statically; what matters is the workload, for example how many applications can or may access the database.

Performance can be increased along with scalability: replicas can be moved to larger EC2 instances to give them more performance – more resources, more simultaneous connections, and more data in the cache. Load balancing can be implemented across all replicas. “Amazon CloudWatch, which monitors the database, provides suitable metrics, such as CPU or main memory utilization.”

TigerGraph

In March 2020, version 3.0 of the TigerGraph graph database was released. As TigerGraph Cloud, it is also available as a graph database-as-a-service. The strengths of this version, which has a visual query builder, are linear scalability and, above all, analysis. With “analysis at the click of a mouse”, the user should be able to draw relevant conclusions from complex data relationships simply by moving the nodes and edges in a diagram and specifying the analysis levels.

TigerGraph is particularly proud of its ability to examine and display up to ten levels of relationships. This requires a high degree of scalability in a suitably equipped cloud instance. In a demonstration, TigerGraph ran on AWS. For this scalability to grow linearly, appropriate cluster management and massively parallel processing are required, two performance features that TigerGraph values.

The data is stored and secured in the cluster. Custom indexing is intended to enable users to improve database performance for specific questions – analogous to the working set in Neptune. “Similar to the index at the end of a textbook, a user-defined or secondary index in a database also contains references that allow the user to directly access the data they need at the moment.”

Data In The Cloud Is Different – Analyses Should Be Too

As organizations move to the cloud, they should ensure their analytics solution lives up to cloud realities. This is the only way to exploit the potential of the cloud fully. The debate over whether or not companies are moving to the cloud is over: no matter which analyst reports you look at, they all indicate that data growth in the cloud is far outpacing on-premises growth.

The pandemic accelerated this trend, as companies are phasing out legacy on-premises technologies in record time. As if that wasn’t proof enough, Snowflake’s impressive IPO shows just how powerful the cloud has become — for both customers and Wall Street.

Businesses go to the cloud for many reasons. They want to be more flexible and agile, deliver innovative services faster, improve the customer experience, and increase profits. And what is at the heart of all this effort? The data.

But data in the cloud is different from data on-premises. As cloud adoption accelerates, organizations need to understand the differences between the two systems. Only in this way can they exploit the potential of the cloud and avoid costly mistakes. Here are three main differences:

Larger Amounts Of Data

Over the past decade, the amount of data we generate has exploded. This growth is only accelerating with cellular, the Internet of Things, and the ever-growing number of SaaS applications. IDC projects that by 2025 we will have around 175 zettabytes of data. That’s right – zettabytes, which is 10²¹ bytes.

Cheap, efficient storage in the cloud has made this growth in data volumes possible. But it also brings entirely new problems. Businesses are drowning in their data. The technologies that have historically been used to process all this information can hardly handle this scale. Even more problematic, however, is that no human can quickly sift through all of this data to uncover meaningful insights. Given the amount of data, this is simply not feasible. We have leapt from looking for a needle in a haystack to looking for a hand in a wheat field.

The Half-Life Of The Data Is Shorter

Data in the cloud is not only orders of magnitude larger than on-premises data; it also loses value faster. Data is generated from many different digital sources and stored in the cloud, and it is updated as new interactions take place. Data that was brand new in the morning is already out of date by the end of the day.

Let’s take the example of a company website. Anyone conducting a significant product launch wants to know what has happened in the last hour. To take full advantage of website traffic and get the most out of the launch, the company needs to react quickly and possibly reallocate and adjust resources. The data from the previous day is useless, while the current data is priceless.

Data Governance Is More Difficult

One of the biggest obstacles for companies has been the fragmentation of their data. Company data was distributed everywhere and used differently by different departments and teams. That was already a major hurdle in managing and backing up the data.

In the cloud world, this challenge is even more significant. Businesses have thousands of different applications, each generating data. This data can be stored in SaaS applications, data lakes, public clouds, private clouds, or even across multiple clouds.

Managing this data requires ensuring that each person has access to the correct data. Most companies are lagging miserably here. Technologies not designed to collect data at the most granular level make it almost impossible for them to get a handle on their data.

New Analytical Skills Are Required

Data in the cloud requires a new kind of analysis. Without a fundamentally new architecture, there is no point in forcibly moving the solutions developed for designing dashboards on desktops to the cloud.

Businesses looking to get more from their cloud data should look for analytics platforms that have three capabilities:

  • Search for quick data scrutiny: With so much data in the cloud and so much change happening so quickly, analytics solutions must provide an easy and fast way for everyone in the organization to access data at the most granular level. If business users have to wait days for a report or dashboard to be generated, they will make decisions without data. With a search function, business users can quickly search through data and get the insights they need to make decisions. In addition, the search function should be able to drill down into detail: aggregated or averaged data does not provide the valuable, nuanced insights needed for decision-making.
  • AI to uncover hidden insights: Companies need analytics solutions that make it possible to use all the data in the cloud in a meaningful way. Otherwise, employees will quickly be overwhelmed by the sheer volume of data. Technologies that integrate AI and machine learning automatically show employees what’s important, new, and changed. They ensure that essential insights do not remain buried in the mountains of data in the cloud.
  • Fine-grained security for cloud-level governance: With the growth of cloud data, security and governance have become tenfold more critical. The only way to keep up with this growth is to use analytics platforms whose governance model is as scalable as the data in the cloud. Anything built with a desktop architecture can’t handle the data growth, slowing down business and introducing severe risks.

Data in the cloud is, without question, the future. But to unlock its value and make meaningful use of the data, companies need new technologies that can cope with the new requirements. If companies try to retrofit an old BI solution for a modern cloud data warehouse, they will probably not see the expected results. They will not be able to exploit the potential of the cloud entirely.

Database-As-A-Service – Use Graph Technology In The Cloud

Software-as-a-Service (SaaS) and elastic computing capacity had long since arrived in the cloud while databases were only just getting started. Reasons for this included the infrastructure requirements of database technologies, compliance concerns, and data gravity. In recent years, however, Database-as-a-Service (DBaaS) has caught up. Now included: graph databases.

The use of graph technology – connected to the cloud – is considered one of the most important trends for 2021. According to Gartner, 30 percent of companies worldwide will use graph databases in the next two to three years to quickly access the right data context for decision-making.

The Perfect Mix: Graphs, Data Science & Cloud

Graph analytics provides data scientists insight into the relationships between different entities, such as organizations, people, and transactions. The aim is to recognize and check patterns in data that remain undetected with conventional analyses (e.g., relational databases).

The graphs get a further boost when combined with machine learning functionalities and graph algorithms, with which data sources and documents can be searched at will and in real time. The areas of application are wide. In medicine, graph data science is used, for example, to research new treatment options for diabetes. Manufacturers, in turn, use graph technology for supply chain management (SCM) and product data management (PDM) to identify causes of errors during quality controls. Banks and insurance companies use graphs to fight money laundering and tax evasion fraud.
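
A small illustration of this kind of graph algorithm, using a hypothetical transaction graph analysed with the networkx library rather than a specific graph database product: PageRank highlights unusually central accounts, the sort of pattern a fraud or money-laundering analysis looks for.

```python
import networkx as nx

# Hypothetical transaction graph: directed edges are payments between accounts.
G = nx.DiGraph()
payments = [
    ("acct_A", "acct_B"), ("acct_C", "acct_B"), ("acct_D", "acct_B"),
    ("acct_B", "acct_E"), ("acct_E", "acct_A"), ("acct_F", "acct_E"),
]
G.add_edges_from(payments)

# PageRank scores accounts by how much "flow" converges on them;
# unusually central accounts are candidates for closer inspection.
scores = nx.pagerank(G, alpha=0.85)
for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```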

The transition to the cloud is now the next logical step for graph databases to create more freedom and agility for developers and graph applications. The result is Graph Database-as-a-Service (DBaaS). The move to the cloud is urgently needed to react flexibly and drive innovation even in uncertain times.

Ease Of Use And Flexibility Of DBaaS

There are two reasons for the trend towards Database-as-a-Service (DBaaS): ease of use and flexibility. On the one hand, developers can focus on programming applications without managing the infrastructure. On the other hand, the transition to a cloud service shortens the time to value and enables applications to be delivered much faster than on-premises. Further advantages are significantly faster development of apps and reduced costs: the back end automatically grows with the requirements as the program is written, and only what is used is billed in the cloud.

The trend towards GDBaaS is becoming more and more established: the graph database provider Neo4j reported that around 90 percent of customers were running their graph-based applications in the cloud in 2020. For many companies, it is also the first time that they have ever used graph technology. In January, Neo4j introduced the enterprise version of Aura, which gives users a dedicated Virtual Private Cloud (VPC); their data and infrastructure are isolated from other enterprise and graph users. The Neo4j GDBaaS is available on the Google Cloud Platform and as an early access program on Amazon Web Services (AWS). Aura’s self-monitoring and self-healing architecture is based on two fundamental technologies: Kubernetes and causal clustering.

Kubernetes Container Orchestration

As a standard container orchestration system, Kubernetes offers a reliable and efficient way to manage the server, network, and storage infrastructure. The processes are distributed to the existing servers so that the workloads and services always have the necessary resources to run them. Faulty processes are automatically restarted, which increases availability and reduces downtime.

The container-centric management environment is the basic requirement for efficient IaaS, PaaS, and DBaaS. However, the consistent distribution of systems such as graph databases imposes particularly complex requirements: tens of thousands of databases must be orchestrated simultaneously, automatically, and reliably. Algorithms alone are not enough here. Neo4j Aura therefore used the modular building blocks of Kubernetes to develop a custom Kubernetes operator. This lets the GDBaaS control its own update process (rolling updates) while utilizing the functionality provided by Kubernetes.

High Availability Through Causal Clustering

To ensure a highly available architecture for the graph database, the Raft consensus algorithm was implemented. Raft is a consensus algorithm developed as an alternative to the Paxos algorithm family. It provides the basis for distributed transactions. In the graph database, Raft allows a cluster of servers to work together. Developers can thus direct or redirect transactions either locally or to other members of the cluster and flexibly control the load distribution in the cluster. This guarantees high scalability and high availability. Updates, security patches, and on-demand scaling of the database can be performed without downtime.

With Neo4j Aura Enterprise, each database is operated in a causal cluster of three servers distributed across different data centers. If one of these servers or even the data center fails, the data remains protected, and the application continues to run.

Infrastructure And Security

However, a Graph-Database-as-a-Service cannot rely on its cloud instances alone: the infrastructure in the cloud also includes network components and storage devices. Wherever possible, GDBaaS should therefore fall back on the existing, scalable services of the public cloud providers. In this way, the databases inherit all the properties of the underlying system in terms of security and resilience. End-to-end encryption and isolation within dedicated virtual networks provided in the cloud provider’s systems ensure additional data security.

The cloud means more flexibility, security, reliability, and lower costs for most companies. Databases are no exception. Once the requirements for storing, querying, and managing data in the cloud are met, nothing stands in the way of the “perfect mix” of graphs, data science, and cloud.
