Wednesday, April 17, 2024

Fujifilm Recording Media promotes Kangaroo

Fujifilm Recording Media, the tape manufacturing giant and industry reference, joined The IT Press Tour last week in Rome. It was an opportunity for the vendor's team to detail Kangaroo, its data archiving solution, packaged in a pretty unique mobile unit. The product stems from the obvious fact that hybrid configurations are real and well liked by end users, who can mix the best of both worlds.

It is still the case that cold data, typically backup and archived data normally stored on secondary storage, is not systematically kept on cold storage. This situation has a significant impact on the TCO of the protection environment. At the same time, volumes keep accumulating, especially when you consider archived data with long retention periods. One of the best media for that is obviously tape, a passive medium by nature. Meanwhile, the LTO roadmap has been disappointing, with delivered capacities not aligned with what was presented and expected. An LTO-9 cartridge offers today 18TB raw and 45TB compressed, assuming an optimistic 2.5:1 ratio. The capacity advantage tape had in the past has disappeared: recent HDDs reach 28TB and SSDs even pass the 60TB barrier; of course costs are not comparable, but the point is worth mentioning.
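As a quick back-of-the-envelope check of these figures, the sketch below simply applies the nominal 2.5:1 ratio from the roadmap; the alternative 1.1:1 ratio is my own assumption for data that barely compresses, such as media files already encoded.

```python
# Back-of-the-envelope LTO-9 capacity math (nominal figures quoted above).
raw_tb = 18            # LTO-9 native capacity in TB
ratio = 2.5            # optimistic compression ratio assumed by the roadmap
print(raw_tb * ratio)  # 45.0 TB "compressed" capacity

# The same cartridge holding content that barely compresses (assumed ~1.1:1)
print(raw_tb * 1.1)    # ~19.8 TB, much closer to the raw figure
```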


Let's focus on Kangaroo in detail. It starts with the idea that data archiving requires a cold storage approach able to deliver a very low TCO along with resiliency, scalability, efficiency and easy integration. And don't forget ransomware protection with a natural air gap. The Fujifilm team has designed a very integrated solution coupling in-house software with partners' hardware: servers, disk storage, IBM tape drives and a BDT tape library or autoloader. The minimum capacity is 1PB, so it clearly targets enterprises or SMEs that generate lots of data and need to archive a significant volume. And volumes grow fast in that domain.

The software is Object Archive, a data management layer developed by Fujifilm that connects data sources to secondary storage targets, instantiated as tape libraries, remote sites and cloud instances via the S3 API. The service is exposed via multiple access methods - NFS, SMB and S3 - and can be driven by any archiving, or potentially backup, software sitting in front of the Kangaroo device itself. This flexibility in access methods, both for ingest and offline writes, opens up a wide variety of use cases. The team also insisted on the Open Tape Format, based on POSIX TAR, which allows tapes to be read without the software and keeps data independent when tapes are transferred across sites, and readable well into the future.
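Since the service is exposed over S3, ingest from an archiving or backup workflow can be as simple as a standard S3 PUT. Below is a minimal sketch with boto3; the endpoint, bucket name and credentials are placeholders I made up, not Fujifilm defaults.

```python
import boto3

# Hypothetical endpoint and credentials: Object Archive behaves here as a
# plain S3-compatible target; all names are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://kangaroo.example.local:9000",
    aws_access_key_id="ARCHIVE_KEY",
    aws_secret_access_key="ARCHIVE_SECRET",
)

# Write one archive object; behind the S3 facade, Object Archive decides
# when and where it lands on tape in the Open Tape Format.
with open("dataset-001.tar", "rb") as data:
    s3.put_object(Bucket="project-archive", Key="2024/04/dataset-001.tar", Body=data)
```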


Wishing to penetrate the market faster, and as the solution could be a good fit for smaller companies, Fujifilm plans to introduce a Kangaroo lite model with capacity starting at 100TB.


Monday, April 15, 2024

Know & Decide for a comprehensive IT asset management approach

Know & Decide, founded in 2015 by Emmanuel Moreau, is a French company dedicated to IT asset management for enterprises. Annual turnover approaches $2 million. The idea came from the deep relationships with CIOs that the CEO has cultivated for a few decades. All these companies struggle to track, control, manage and monitor IT resources of all kinds.

The Know & Decide solution is a pure software product composed of 3 modules: discovery, management and reporting.

The data discovery module collects information through more than 80 collectors via API, plus a file import feature for all offline documents related to IT resources, such as financial or contractual files. The module also connects to ERPs and other similar reference catalogs to feed the CMDB.


On the data management side, the solution embeds correlation functions to clearly identify all associated information, avoid duplicates and reach a strong level of consistency. The reporting module then provides a global, central view of all discovered IT assets, serving as the reference, or source of truth, for the IT environment.
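To picture what such a correlation step does, here is a deliberately simplified, entirely hypothetical sketch (not Know & Decide's actual logic) that merges asset records coming from two collectors on a common serial number and flags divergences.

```python
# Hypothetical CMDB correlation: merge records from two collectors on a common
# key and surface conflicting attributes before consolidation.
from collections import defaultdict

inventory = [  # e.g. records pulled from an endpoint-management API
    {"serial": "SN123", "model": "Laptop X1", "owner": "alice"},
]
contracts = [  # e.g. records imported from a financial/contract file
    {"serial": "SN123", "lease_end": "2025-06-30", "owner": "a.smith"},
]

merged = defaultdict(dict)
for source in (inventory, contracts):
    for record in source:
        for field, value in record.items():
            existing = merged[record["serial"]].get(field)
            if existing not in (None, value):
                print(f'conflict on {record["serial"]}.{field}: {existing!r} vs {value!r}')
            merged[record["serial"]][field] = value

print(dict(merged))  # one consolidated CI per serial number
```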

The product is deployed on-premises with collector servers, a central database and reporting/UI servers. It is designed to gather all this information without any local agent, agents being notoriously difficult to manage and keep at a consistent level. Highly configurable, the product is also a no-code solution.

The philosophy is to align the physical reality and the reporting with all contracts, significantly reducing the divergence between these 2 areas, with the final goal of converging on the truth.

Five use cases have been developed to illustrate how the solution can be applied. Use case #1 is about a global vision of all IT assets, use case #2 touches on the quality of the CMDB, use case #3 is related to the production plan, use case #4 covers the security plan and use case #5 details budgeting to streamline the cost model.

As of today, the product is deployed at very large customer sites in France, Belgium and Luxembourg, and the firm is looking for new partners that can represent it and penetrate some vertical markets. The pricing model is a 3-year subscription based on an annual cost per CI, varying from €1 to €10 per year.

This solution also contributes to the green and, more globally, the ESG objectives of enterprises.

Emmanuel Moreau told us that the next key features added to the solution will be AI-based, providing natural language processing and easy access to all reports. We'll follow this story in the coming months, as it is an interesting approach to a real challenge for companies, especially large ones.

Friday, April 12, 2024

CTERA unveils Vault 2

CTERA, the leader in Distributed File Storage for the enterprise, participated in the 55th edition of The IT Press Tour 2 days ago in Rome. It was the opportunity to get an update on the company and product strategy from Oded Nagel, CEO, Aron Brand, CTO, and Saimon Mickelson, VP Alliances.

2023 was an incredible year with new products, a new partner program, a very active Hitachi Vantara partnership, 2x new business and 30% growth in ARR, plus top rankings from analysts like Coldago Research and positive comments from bloggers.


The product continues to evolve, essentially around data services, even if it is already very comprehensive. It is increasingly adopted as a central modern data platform for distributed enterprises, supporting various workloads, user groups and associated applications across a wide variety of industries and vertical segments. This is perfectly illustrated by a new series of wins in multiple domains, beating NetApp and a few other classic NAS vendors unable to deliver this level of services.


Among recent features and services, CTERA delivered Migrate, ransomware protection, anti-virus and WORM Vault, plus common S3/NAS content access and Analytics. To refresh our readers, Vault offers multiple modes to protect data at the cloud folder level: WORM, and WORM + Retention in 2 flavors, Enterprise and Compliance mode.

Beyond these, the CTERA executive team present in Rome had a good surprise in store with a live announcement of Vault 2. It brings 3 things: legal hold, object lock and chain of custody. Legal hold gives legal teams an official way to preserve data regardless of retention settings. Object lock operates at the file level and is thus more granular than the previous iteration. The last feature is a collaboration with Hitachi Vantara, a key partner for CTERA, to provide a full report related to data migration. Vault 2 will be available in Q2.
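Conceptually, file-level object lock boils down to refusing any modification or deletion before a retention date, with legal hold able to extend that indefinitely. The sketch below is a generic illustration of that rule, not CTERA code.

```python
# Generic WORM / legal-hold check, purely illustrative (not CTERA's implementation).
from datetime import datetime, timezone

def can_delete(retain_until: datetime, legal_hold: bool) -> bool:
    """A locked file may only be deleted once retention has expired and no hold applies."""
    if legal_hold:
        return False
    return datetime.now(timezone.utc) >= retain_until

print(can_delete(datetime(2031, 1, 1, tzinfo=timezone.utc), legal_hold=False))  # False until 2031
```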


Announced during our last visit to the CTERA Tel Aviv HQ in 2023, the ransomware protection is a must for all enterprises, and the company's flavor leverages AI to deliver fast and reliable detection and remediation. We were also briefed on a second piece of news during the same session; I will cover it in a few days to respect the embargo, but again expect more from CTERA next week.

Thursday, April 11, 2024

QStar, a confidential leader with Archive Manager

A specialist in unstructured data archiving, QStar Technologies, founded in 1987, still keeps a very low profile even though the company has secured more than 19,000 installations since its inception. This level of adoption is unique in the domain and confirms that the firm is profitable, operating at its own pace.

QStar CEO Riccardo Finotti, CTO Max Finotti, and SVP Sales and Marketing Dave Thomson joined The IT Press Tour yesterday in Rome for a company and product update. They even launched a product I'll cover in a few days.


The team has developed a comprehensive product line with Archive Manager, the heart of the solutions set, Network Migrator, Archive Replicator and Object Storage Manager.

Archive Manager (AM) operates as the archive controller, managing the storage units dedicated to this role and the lifecycle of submitted files. Exposing multiple access methods like NFS, SMB or S3, the product can be seen as a gateway in front of tape libraries or the cloud. Implementing an S3-to-tape model, AM also supports any backup tool, such as Cohesity, HYCU or Rubrik, via S3 or file sharing protocols. AM introduces the notion of a disk cache with a special file system designed by the company, and a special format on tapes. The AM server must have a minimum of 32GB of RAM for each volume "attached" to the destination storage units. A minimum of 1TB of space is needed for the cache, which appears pretty small today with 24TB HDDs or even larger; this size must also accommodate the largest file submitted to AM, and of course a disk array with RAID or erasure coding, today on NVMe, could be the right choice. As cache performance is key to the global quality of service, using flash for the cache makes real sense.
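To make the sizing rule concrete, here is a small sketch based on my reading of the figures above (32GB of RAM per attached volume, a cache of at least 1TB and no smaller than the largest file to be archived); it is a rough helper, not a QStar tool.

```python
# Rough Archive Manager sizing helper based on the figures quoted above:
# 32GB RAM per attached volume, cache >= 1TB and >= the largest submitted file.
def am_sizing(volumes: int, largest_file_tb: float) -> dict:
    ram_gb = 32 * volumes
    cache_tb = max(1.0, largest_file_tb)
    return {"ram_gb": ram_gb, "cache_tb": cache_tb}

print(am_sizing(volumes=4, largest_file_tb=2.5))  # {'ram_gb': 128, 'cache_tb': 2.5}
```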


Archive Replicator is in fact the same product with the capability to remote copy data sets synchronously to up to 4 other storage units. Network Migrator is an HSM working in agent, API or pull mode, replacing migrated files with stubs.


But it appears that for larger configurations a single archive node is not enough, inviting users to multiply this single-node configuration. We also anticipate a cluster mode for Archive Manager able to aggregate the performance of all nodes and deliver the boost needed to address large deployments with high data volumes and huge numbers of files. More news soon.


Tuesday, April 02, 2024

In a few days, the 55th edition of The IT Press Tour will take place in Rome

The countdown is set: The IT Press Tour will land soon in Rome, Italy, for its 55th edition. This tour will be dedicated to IT infrastructure, cloud, networking, data management and storage with 6 hot and innovative companies coming from various horizons:

  • CTERA, the obvious leader in global file services,
  • Fujifilm, the reference in tape manufacturing with new data management solutions,
  • Know and Decide, a recent player in IT asset management,
  • Leil Storage, a young company dedicated to a new generation of MAID solutions,
  • QStar Technologies, a pioneer in data management and archiving software,
  • and Quickwit, a fast growing actor in log management with a powerful indexing technology.

I invite you to follow us on Twitter with #ITPT and @ITPressTour, my Twitter handle @CDP_FST, and the journalists' respective handles.


Thursday, March 07, 2024

Quantum introduced an all-flash ActiveScale

Quantum, the champion of secondary storage, continues to maximize its ActiveScale acquisition. The software arrived in 2020 from Western Digital for just $2M, even though HGST had spent $270M in cash to acquire Amplidata in 2015.

The new product iteration, the Z200, relies on a 1U chassis running the new software, clustered with a minimum of 3 nodes and 460TB of capacity. The solution supports hot, warm and cold data, working with Myriad but also tape libraries, and acts as a real core element in the data flow.

The recent case study from Amidata in Australia (add the letters "pl" and you get Amplidata, funny right?) perfectly illustrates some of the capabilities, with data distribution over 3 sites for better resiliency and quality of service for remote users.


Wednesday, February 28, 2024

54th edition of The IT Press Tour in Colorado and California very soon now

In a few days, The IT Press Tour will operate its 54th edition, this one taking place in Colorado and California with an amazing program in perspective. This tour will be dedicated to IT infrastructure, cloud, networking, data management and storage with 10 leading and innovative US companies:

  • Arcitecta, a pioneer in data management,
  • BMC Software, a reference in IT operations,
  • Cohesity, a new generation data protection player who just announced its intent to acquire Veritas Technologies backup business,
  • Hammerspace, the fast growing player in file data access,
  • Nimesa, a young company dedicated to making SQL databases more resilient,
  • Quantum, the established primary and secondary storage vendor,
  • Qumulo, a leader in scale-out NAS,
  • Solix, a long time player in structured data management,
  • Stonefly, a confidential SMB data storage vendor,
  • and WEKA, a leading actor in high performance file storage.

I invite you to follow us on Twitter with #ITPT and @ITPressTour, my Twitter handle @CDP_FST, and the journalists' respective handles.


Tuesday, January 02, 2024

Recap of the 52nd edition of The IT Press Tour

Initially posted on StorageNewsletter 15/12/2023
 
The 52nd edition of The IT Press Tour happened recently in Madrid, Spain, and it was an opportunity to meet European and American companies, some famous names we had already met but also newcomers, so globally a good mix of people bringing innovations and new ways to address IT and storage challenges. During this edition dedicated to cloud, IT infrastructure, networking, security, data management and storage, we met DataCore, Disk Archive, Inspeere, Tiger Technology, XenData and ZettaScale Technology.

DataCore
The team has chosen the event to announce 2 major developments for SANsymphony and Swarm. At the same time, a company update was necessary as the positioning continues to evolve at a rapid pace with acquisitions and active presence in historical adjacent storage domains. It means solutions for the core, edge and cloud environments, with some similar challenges but also radically different ones from primary to secondary storage areas.

DataCore confirms its financial growth and robustness with 14 consecutive years of profitability, not so common in the storage industry. 30% ARR growth is delivered with 99% recurring revenue. To illustrate this, the southern European region, led by Pierre Aguerreberry, signed 201 new customers in the last few months, fueled by a loyal channel partner network and a significant product portfolio expansion. As already mentioned, the management team has chosen to go beyond its comfort zone with object and Kubernetes storage solutions, plus more recently AI extensions to feed the entire line and even a dedicated business unit, named Perifery, targeting media & entertainment IT and storage needs. This strategy feeds a cross-/up-sell model that fuels partners with new products to sell into a strong installed base.

First, SANsymphony, a reference in storage virtualization, for several years now called software-defined storage, will support NVMe over TCP and FC, improve snapshots and CDP rollback with compression, provide extensions for VMware with better vCenter integration, and deliver adaptive data placement (ADP) as a new key capability. This core feature optimizes primary storage performance, QoS and cost with auto-tiering and inline deduplication and compression. The block access layer continuously captures and accumulates access information for each data block and thus decides where to place the block within the storage pool. It helps make the right placement decision between 2 blocks accessed at the same time when one of them has also been actively touched previously, changing the "temperature" of that block.
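As a toy model of that logic, the sketch below assumes a simple exponentially decayed access counter per block; DataCore has not published the actual heuristic, so this is only an illustration of the "temperature" idea.

```python
# Toy "block temperature" model: recent accesses weigh more than old ones,
# and the hottest blocks are promoted to the fastest tier.
import time

DECAY_HALF_LIFE = 3600.0  # seconds; purely illustrative

class Block:
    def __init__(self):
        self.temp = 0.0
        self.last = time.time()

    def touch(self):
        now = time.time()
        # decay the previous temperature, then add this access
        self.temp = self.temp * 0.5 ** ((now - self.last) / DECAY_HALF_LIFE) + 1.0
        self.last = now

def place(blocks, flash_capacity):
    hottest = sorted(blocks, key=lambda b: b.temp, reverse=True)
    return set(hottest[:flash_capacity])  # these go to the flash tier, the rest to HDD
```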

On the Swarm side, the main news is the single-server approach, in fact the containerization of the object storage software orchestrated with Kubernetes. This iteration fits the edge strategy by offering a ready-to-use, simple S3 storage for relatively small configurations under 100TB. It also means that Swarm can now be deployed in different modes: pure Swarm with clusters, potentially on multiple sites, but also as smaller configurations building a truly dispersed network federated by Kubernetes. Other improvements are S3 object locking for additional backup software, in fact more a validation exercise, and soon object services to automate processing workflows.

One last piece of information regarding both products: they will also receive some AI-oriented features, AIOps for SANsymphony and object services for Swarm.


Disk Archive
Founded in the UK in 2008, Disk Archive is self-funded and profitable, supporting 450+ customers. The company has designed a cold data storage platform to address long-term data archiving needs.

The product name, ALTO, stands for Alternative to LTO, and clearly promotes the usage of HDDs rather than LTO tapes. ALTO is well adopted in media & entertainment but also in oil and gas and other domains. Alan Hoggarth, CEO and founder, claims to deliver a lower TCO than tape and tape library-based solutions with similar capacity and retention times.

One of the dimensions of cost reduction is the energy bill. In other words, as an active (powered) medium, how do you manage the power of HDDs over 10 or 20 years? It's impossible, not to say stupid, to leave the entire disk array up and running over that period of time. You get the idea: Disk Archive leverages the MAID concept – Massive Array of Idle Disks – highly promoted by Copan Systems in the mid 2000s and later by Nexsan with Auto-MAID. Different iterations have been made on this MAID idea. MAID brings several effects, such as longer HDD life, proven by Disk Archive's field experience, plus air gap and vault properties. The team has seen 15 years of lifetime and counting for HDDs in systems deployed in the early days of the company. Globally, the power consumption drops to less than 210W per PB.

Leveraging standard software and components, Disk Archive belongs to the SDS category, delivered as a hardware-plus-software bundle. Each machine is a 4U chassis with 60 HDDs delivering 1,440TB with 24TB disks. Each primary chassis runs CentOS and can manage up to 10 expansion enclosures. A smaller model exists with 24 HDD slots. The company sells empty systems and users have the choice to pick any 2.5″ or 3.5″ HDDs, or even SSDs. For MAID to be effective, it's important to understand that it is counter-productive to group or unify disks into logical volumes or LUNs with logical volume managers or RAID, as that creates dependencies on their state. Instead, disks are managed individually with a disk file system on each, here ext4. On the access side, the ALTO node exposes an API and an SMB share via a gateway mode.

A file is written in its entirety at least 2 times, not segmented at all, to 2 disks in a chassis, or across chassis if multiple systems are deployed. One copy is also possible if another copy is available outside of the Disk Archive managed perimeter. This immediately means that the maximum file size is limited by the size of the ext4 partition on a disk, but today, with high-capacity HDDs, this model is largely sufficient in the vast majority of cases.
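Below is a sketch of that placement policy as I understand it, assuming independent ext4 mount points per disk and two full copies per file; the mount points are hypothetical and this is my interpretation of the description, not Disk Archive code.

```python
# Illustrative placement: each file is copied in full to two different disks,
# each disk carrying its own ext4 file system (no RAID, no LVM spanning).
import os, shutil

DISKS = ["/mnt/disk01", "/mnt/disk02", "/mnt/disk03"]  # hypothetical mount points

def archive(path: str, copies: int = 2):
    size = os.path.getsize(path)
    # pick the disks with the most free space that can hold the whole file
    candidates = sorted(DISKS, key=lambda d: shutil.disk_usage(d).free, reverse=True)
    targets = [d for d in candidates if shutil.disk_usage(d).free > size][:copies]
    if len(targets) < copies:
        raise RuntimeError("not enough independent disks with free space")
    for disk in targets:
        shutil.copy2(path, os.path.join(disk, os.path.basename(path)))
    return targets
```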


Inspeere

Based in France, Inspeere was founded in 2019 and recently raised €600,000 to sustain its ambition. The mission is to offer a new way to protect data against cyber threats, data loss or, more globally, system failure with an innovative backup solution dedicated to edge IT. The product relies on a mix of hardware, the Datis box, an x86 server running Linux and OpenZFS, and a data orchestration and management software layer.

In detail, the team has designed a P2P architecture that links a data source to N similar targets. The machines in this dispersed network are all peers, hence the company name, and contribute to the robustness of the solution. The source machine snaps, compresses, encrypts, splits, encodes and distributes data chunks to remote systems. Inspeere has developed this data distribution based on Reed-Solomon erasure coding (EC). It's key to note that data is encrypted at the source before the chunking and distribution phases, as the EC model used here is systematic.

Also, the EC supports 32+16 on paper, meaning a total of 48 peers tolerating up to 16 failed or unavailable machines. OpenZFS is paramount here, with of course local data integrity but above all its read-only snapshots and replication mechanism. ZFS is a disk file system, so pay attention to the philosophy of its use: Inspeere doesn't offer a distributed or scale-out ZFS but rather a way to glue independent ZFS-based servers together. All Datis entities are autonomous, simply connected and maintaining a specific network usage.

Inspeere targets SMB entities, and the team has realized that 4+2 or 6+2 is largely enough and matches actual deployments. As Datis boxes are not volatile systems, their availability is high and allows this reduced number of parity chunks. These systems operate as local file servers within each company, serving "classic" data and acting as the backup repository for clients via backup software like Acronis, Atempo, Nakivo, Veeam, Bacula… or others, or even simple tools and OS commands. All Datis boxes store all data versions and protect themselves via the remote peers, reaching a new level of data durability.
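The arithmetic behind a systematic k+m scheme is simple, as the quick check below shows for the 32+16 layout mentioned above and the smaller 4+2 and 6+2 layouts the team finds sufficient in practice.

```python
# Storage overhead and fault tolerance of a systematic k+m erasure code.
def ec_profile(k: int, m: int):
    return {"peers": k + m, "tolerated_failures": m, "overhead": (k + m) / k}

for k, m in [(32, 16), (6, 2), (4, 2)]:
    print(f"{k}+{m}:", ec_profile(k, m))
# 32+16 -> 48 peers, 16 failures tolerated, 1.5x raw overhead
# 6+2   -> 8 peers,   2 failures tolerated, ~1.33x
# 4+2   -> 6 peers,   2 failures tolerated, 1.5x
```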

This approach prevents or delays the purchase of secondary storage, contributes to a very efficient data protection TCO and therefore contributes positively to green and ESG corporate objectives. The solution is obviously GDPR and NIS2 compliant.

Now, again, nothing really new here: it is all about execution, probably via specific partners targeting vertical needs in certain activities.


Tiger Technology
The Bulgarian company has chosen a data resiliency angle, addressing data availability and disaster recovery in a hybrid world. Founded 18 years ago, Tiger Technology, today with 70+ employees, is a well-known player in file storage, coming from a pure on-premises world to hybrid. The result is significant, with a footprint of 11k+ customers, essentially in rich content like media and entertainment, surveillance and healthcare, but also generic IT.

This market adoption is fueled by Tiger Bridge, acting as an advanced Windows-based file storage gateway. Users don't feel any difference between local and cloud files; this is the result of a pretty unique Windows and NTFS integration and expertise.

Hybrid cloud is a reality, coming both from users who fully jumped into the cloud and then started some repatriation to finally adopt a mixed configuration, and from users moving incrementally to the cloud for some data, workloads and vertical usages. The final landing zone is this hybrid mode, with different balance points for various industries, needs and configurations. Users drive this adoption based on quality of service, flexibility, complexity and above all TCO.

Tiger has promoted for quite some time a model called on-premises first (OPF), with a progressive, controlled cloud extension coupled seamlessly to local production sites. The data gravity dimension is key here, with an immediate reality in some applications, as we live in a digital world flooded with a real data deluge.

Key for edge applications, Tiger Technology identified the need to integrate Tiger Bridge with key vertical needs such as surveillance, healthcare and a few others. To sustain that strategy and these new areas of growth, the management has decided to create new business entities, like Tiger Surveillance, dedicated to that business and industry segment. In that domain, massive rich media files are captured all day and require local space for constant camera feeds and rapid problem detection, aligned with local regulations and quality of service objectives, but also extension to cloud object storage for the bulk of the volume.

The company is accelerating here and signs deal after deal with cities, airports and similar entities. For such deployments, data resiliency complements file access methods with DR, CDP and ransomware protection, illustrating why Tiger Bridge is a reference in the domain. The product supports Active/Passive or Active/Active architectures, aligned with application requirements and site constraints. In that A/A mode, configured locally, mixed or in cloud only, airports reach new levels of resiliency, critical for daily operations in the current climate.

We expect Tiger to continue this vertical integration to address IT operations challenges, as Tiger Bridge represents a universal answer.


XenData
Launched more than 2 decades ago, on 9/11/2001, what a date, in the UK by Philip Storey, CEO, and Mark Broatbent, CTO, XenData plays in the active archive data storage category. The mission is to offer a scalable secondary storage platform dedicated to media and entertainment, but also to similar needs in other segments. The original idea was simple: let applications write to an archive, and thus to tape, the same way they write to disk. Self-funded, the original team designed a solution that is today widely adopted, with 1,500+ installations worldwide. The team has found its market: the solution fits media & entertainment needs, a huge population of users of removable media like tape, but also archive lovers. The company also understood that success comes from key partnerships with players already deployed, used and trusted, which finally validate a global solution for end users.

So the concept is to glue an LTO tape library to a disk array, both connected to a server, and globally this stack operates as an archive destination. But active archive really means that there is no need for external help to access and retrieve data; operations are seamless and available to any user via simple, integrated access methods. This is why we see network shares or drive letters on the Windows server. The other key aspect is that the server coupled with disk acts as a cache for ingest and retrieve operations, making things more fluid and faster. And obviously, for frequently accessed files, the disk zone keeps them longer before they reach tape. This is covered by the X-Series product line.

Starting as a single node, the configuration can be extended to a multi-node model connected to external disk arrays, tape libraries and of course the cloud. The team has validated Wasabi, Backblaze, Seagate Lyve and the 2 giants, obviously Azure and AWS.

Beyond this device-based solution, the team has developed a pure software product named Cloud File Gateway to sync archiving sites or XenData instances globally.

The most recent product iteration is the E-Series, an object storage system. Starting at 280TB and able to grow up to 1.12PB with 4 nodes, the solution is essentially an S3 storage entity, confirming what we see in the market: object storage has moved from a truly distinct architecture to just an interface, giving users more flexible choices. The same file-based content can be accessed via file system or HTTP methods.

The team offered a preview of its media browser, coming soon, which allows rapid access to media content in any resolution and complements partners' solutions.

This XenData approach offers a really interesting model, integrating multiple storage technologies coupled with the cloud, with seamless tiering and migration across all these levels.


ZettaScale Technology
Founded in 2022 as a spinout from Adlink Technology, ZettaScale is a pure infrastructure software company developing middleware to set new standards in communication, compute and storage for humans and machines anywhere, at any scale.

The challenge resides in the mix of entities that need to collaborate in a complex world of very dispersed components. To enable this, it is paramount to consider a specific, dedicated exchange protocol, playing the role IP had, and still has, in the Internet's birth, design, growth and ubiquitous adoption. This need appears in IoT, edge, automotive, robotics and other advanced devices that need to communicate, exchange data, and potentially process data.

To be precise on the automotive aspect, the complexity comes from software integration, with huge immediate challenges around processing, exchanging and storing a fast-growing data volume. The other fundamental design requirement is to support the dispersed and decentralized nature of the environments to cover. This is a big change from the classic centrally managed approach, which is not aligned with the new world we live in. We rely today on old protocols with wireless and scalability difficulties, plus the energy dimension.

A solution has been developed, Zenoh, that provides a series of key characteristics and properties, such as the unification of data in motion, data at rest and computation, from very small entities like microcontrollers up to data centers. It is an official standard protocol with pending ISO 26262 ASIL D certification. The other core element is location independence, supporting distributed queries. Imagine moving vehicles in cities: the data exchange must be fast, resilient and accurate, coming from any vehicles "interacting" with each other, and after a car crash some of them could disappear and become unreachable. Zenoh was built for that and represents the state of the art in the domain. It is written in Rust and offers native libraries and API bindings supporting a wide variety of languages and network technologies, with Unix sockets, shared memory, TCP/IP… and even Bluetooth or serial. It runs on almost everything, i.e. Linux, Windows, macOS or QNX, leveraging any topology. Zenoh promotes a universal model with publish/subscribe, remote computation and storage on file systems, MinIO, AWS S3, RocksDB or InfluxDB.
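To give a feel for the programming model, here is a minimal publish/subscribe sketch using the Zenoh Python bindings; the key expression is made up and the exact API surface varies between Zenoh releases, so treat it as an approximation rather than reference code.

```python
import zenoh

# Open a Zenoh session with the default configuration (peer discovery, etc.).
session = zenoh.open(zenoh.Config())

# Subscribe to a (made-up) key expression covering all vehicle telemetry.
def on_sample(sample):
    print(f"received {sample.key_expr}")

sub = session.declare_subscriber("demo/vehicle/**", on_sample)

# Publish a value; any reachable subscriber matching the key expression,
# from microcontroller to data center, will receive it.
session.put("demo/vehicle/42/speed", "87")

session.close()
```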

ZettaScale recently unveiled its Zenoh platform, which significantly boosts the adoption and deployment of Zenoh-based projects in various domains: robotics, submarine vessels, heavy mining, drones, logistics and of course automotive, and we have already seen some very promising demonstrations in some of these areas. It has also powered what is called the Software Defined Vehicle, serving as an open communication backbone. Obviously plenty of OEMs are interested in this technology, which represents a big leap in the category.


Thursday, December 28, 2023

Inspeere promotes a new P2P backup approach

Inspeere, a French data management startup, joined the recent IT Press Tour, organized in Madrid, Spain, and spent time explaining its data protection mission.

Founded in 2019 in Poitiers, France, with a recent seed round of €600k, the team has designed a P2P backup solution that leverages research work done by its CTO, Olivier Dalle, during his tenure at CNRS at the University of Nice Côte d'Azur.

The main idea is to adopt a decentralized architecture with a network of consumers and producers. All participating systems are both a consumer, a source machine generating data, and a producer, a target machine that stores data coming from consumers.

Olivier Dalle, CTO and co-founder


This model means no centralization of data on a server or a backup device at a single site.

Inspeere sells a service represented by a server deployed at the source site, where data is produced. This system, named the Datis box, is then configured to belong to a P2P network and therefore participate in the global protection. One of the key elements that makes this solution robust is the choice of the ZFS disk file system in its open source flavor, which means several ZFS functions are available, like compression, snapshots and replication, on top of strong data integrity.

The data workflow and process is simple and straightforward. The first step is the local backup, made by any tool or even a dedicated backup product. This backup image is then made consistent with a local snapshot, which is compressed and then encrypted. At that point, everything is ready to leave the source machine. Before sending data to other participating machines, an erasure coding (EC) scheme based on Reed-Solomon is applied to the data, and each fragment, data or parity, is sent to ZFS targets via ZFS replication. The EC mode theoretically considers 48 targets, in detail 36+12, i.e. 36 data chunks plus 12 parity chunks. In practice, it appears that 4+2 or 6+2 models are largely enough, with very resilient Datis boxes up and running all the time and very low failure rates or downtime. Beyond data-oriented tasks, network optimizations have been made with intelligent bandwidth allocation, named DataSmooth, and advanced load balancing, called Savvy. In the end, all backup images are stored locally on each source machine and also dispersed across peer machines.
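As an illustration of the ordering that matters here (compress, then encrypt, then erasure-code the ciphertext), below is a simplified sketch; it uses a trivial XOR parity as a stand-in for real Reed-Solomon coding and the standard cryptography library, so it is a conceptual example, not Inspeere's implementation.

```python
# Conceptual backup pipeline: compress -> encrypt -> split + parity -> distribute.
# XOR parity stands in for Reed-Solomon; real deployments use k data + m parity chunks.
import zlib
from functools import reduce
from cryptography.fernet import Fernet

def prepare(snapshot: bytes, key: bytes, k: int = 4):
    ciphertext = Fernet(key).encrypt(zlib.compress(snapshot))   # encrypt before any chunk leaves
    ciphertext += b"\0" * (-len(ciphertext) % k)                # pad so it splits evenly
    size = len(ciphertext) // k
    chunks = [ciphertext[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
    return chunks + [parity]                                    # k data chunks + 1 parity chunk

key = Fernet.generate_key()
fragments = prepare(b"local backup image ...", key)
# each fragment would then be shipped to a different Datis peer via ZFS replication
```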


Inspeere reminds me of my own project, KerStor, launched in 2009; at the time there were only a few pioneers like Aetherstore, Wuala, UbiStorage, Symform and a few others. Globally, this segment, which I follow closely, has counted more than 20 players and solutions.

The Inspeere solution is of course GDPR and NIS2 compliant, key in Europe and obviously a must for a French company. The nature of the solution positively impacts the ESG and green model, as secondary storage purchases are delayed or even avoided.

In our current times, with high pressure from cyber threats, this approach of dispersed, encrypted data fragments makes penetrating the system and modifying data almost impossible. Now, the go-to-market is very critical here, as classic partners can't really be engaged given the avoidance of secondary storage purchases. A new partner ecosystem is needed to address remote and branch offices and distributed businesses like real estate networks, franchises, regional entities, law firms...


Thursday, December 21, 2023

DataCore develops new iterations for Swarm

DataCore, a leader in storage management over the last 2 decades, shared its roadmap for Swarm 16, its object storage software coming from the Caringo acquisition, effective since early 2021.

The main direction is the iteration toward small configurations, less than 100TB, and single containerized instance deployments, managed by Kubernetes, aligned with ROBO and edge needs. It means that Swarm will be deployable as classic clusters, Kubernetes clusters and federations of independent instances, all globally connected together.

The second key new feature will be object services, i.e. local or in-place data processing capabilities running as containers, also orchestrated by Kubernetes; it represents an important way to leverage core, edge and data center deployment models and address data gravity.

Also, S3 object lock continues to be validated with several backup products; v16 is certified with Veritas NetBackup. And we'll see how and which AI functions will be added to Swarm in future editions.
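For backup tools, this validation essentially means that standard S3 Object Lock calls behave as expected against a Swarm bucket. Below is a hedged sketch with boto3; the endpoint, bucket name, credentials and retention period are placeholders of my own, not values documented by DataCore.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Placeholder endpoint and credentials for an S3-compatible Swarm domain.
s3 = boto3.client("s3", endpoint_url="https://swarm.example.local",
                  aws_access_key_id="KEY", aws_secret_access_key="SECRET")

# Object lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="backups", ObjectLockEnabledForBucket=True)

# Write a backup image that cannot be deleted or overwritten for 30 days.
s3.put_object(
    Bucket="backups",
    Key="netbackup/image-0001",
    Body=b"...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```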


At the same time, I have to mention that the company also acquired Object Matrix, a reference in object storage for Media & Entertainment.

The effect of DataCore's strong offering and ambitious strategy is visible in the recent Coldago Research Map 2023 for Object Storage, where it holds a leader position. The report is available from this page.
