Pour One Out for the Dreamcast, Sega’s Awesome, Quirky, Gone-Too-Soon Console



On September 9, 1999, Sega launched the Dreamcast in North America — its last, best hope for relevance in the console market. The console, which was intended to put Sega on a more even footing against competitors like Sony, wound up being the company’s hardware swan song. Sega never launched another console — the company’s Genesis Mini, which releases on September 19, is the first Sega-branded hardware to ship in 20 years (not counting the products Tectoy produces in the Brazilian market).

The Dreamcast is a rare example of a platform that failed despite having relatively few weaknesses or flaws compared with other consoles of its era. The N64 wasn’t as popular as Nintendo hoped because the cartridges of the day had limited storage capacity and therefore limited space for detailed textures. On top of those limits, cartridges were also quite expensive compared with CD-based media. The previous Sega console, the Sega Saturn, was difficult to program and had been rushed out the door in an attempt to beat Sony’s PlayStation to market. The original Xbox One was less powerful than the PlayStation 4 and debuted with a confused, half-baked marketing strategy that saw Microsoft attempt to launch a new game console by focusing on everything it could do besides gaming, and pour substantial resources into a camera add-on rather than the actual machine.

The Dreamcast, in contrast, was a solid piece of kit. It used a 32-bit, two-way superscalar RISC CPU designed by Hitachi, the SH-4, rated for 360 MIPS and clocked at 200MHz. The CPU offered an 8KB instruction cache and a 16KB data cache, and interfaced with the PowerVR2 GPU, designed by VideoLogic and manufactured by NEC. While reportedly not as powerful as the 3dfx hardware that Sega had originally planned to use for the Dreamcast, the PowerVR solution was an affordable option and an effective one. The Dreamcast was designed to use off-the-shelf components to make it an easier target for developers, but the platform was ahead of its time in several respects.

The Dreamcast controller, with Visual Memory Unit (VMU)

The Dreamcast shipped with a modem at a time when 80 percent of the US population was still using dial-up to get online. It used a GD-ROM format that could hold up to 1GB of data — not as large as DVDs, but more capacity than a typical CD-ROM offered. It offered a memory card that doubled as a miniature gaming device, the Visual Memory Unit. Sega’s overall goal with the Dreamcast was to build excitement around its products in the months before the PlayStation 2 would debut, to give it a leg up on the next-generation competition.

From the beginning, however, the console faced an uphill battle. Retailers who had been burned by short-lived Sega products like the Sega CD or 32X (not to mention the Sega Saturn) were unhappy with the company. Sega had initially intended to use hardware from 3dfx, but when 3dfx filed for its IPO, it revealed the Dreamcast before Sega was prepared to make the announcement. Meanwhile, EA decided not to support the Dreamcast, despite having been a major partner on previous Sega systems. According to a retrospective on the console, this decision was driven by a host of factors: the specific component choices Sega made, the company’s indecision over whether to make a modem standard across the entire console range, and Sega’s hardball licensing tactics may all have killed EA’s interest in the platform. A different source in the same article, however, claims that EA walked away from the Dreamcast because Sega wouldn’t give it a guaranteed exclusive on all sports titles for the console, given that Sega had just purchased a development studio, Visual Concepts, to build those titles.

Sony’s PS2 Marketing Blitz

The other factor in the Dreamcast’s demise was the absolute torrent of marketing Sony unleashed. In September 1999, all eyes were on Sony’s PlayStation 2, still more than a year away from its North American debut. In theory, this should have opened a window for the Dreamcast to establish itself. In practice, that didn’t happen. Sony put an all-out marketing blitz behind the PlayStation 2 and its “Emotion Engine.” Sony’s reputation, by this point, was also better. The company had shipped one massive hit, the original PlayStation. Sega, in contrast, had shipped a number of half-baked, expensive flops. The Sega Saturn debacle was only part of the problem. The Sega CD and Sega 32X — both Genesis / Mega Drive add-ons — had failed to impress the market. Handheld products like the Sega Nomad had flopped.

If you were on the fence between Sega and Sony in the late 1990s, Sony looked like the safer bet. Sega’s Dreamcast enjoyed a very strong North American launch, but sales dropped off as the PS2’s launch date approached. Sony had the deep pockets to dramatically outspend Sega on marketing, while Sega was losing money despite brisk hardware sales. Sega cut Dreamcast prices to boost demand, but that meant taking a loss on the platform. While the attach rate for games was reportedly high, the install base wasn’t large enough for the company to achieve profitability this way. By the time the PS2 actually launched, Sega was hemorrhaging cash. Unable to compete with the PS2, Sega threw in the towel on hardware manufacturing altogether.

Image credit: TheDreamcastJunkyard, which has additional screenshots of comparisons between PS2 and Dreamcast visuals in Ferrari F355 Challenge, for the curious.

Compare Dreamcast and PlayStation 2 games today, and it’s clear that the gap between them wasn’t as large as Sony wanted it to seem. Sega Retro notes:

Compared to the rival PlayStation 2, the Dreamcast is more effective at textures, anti-aliasing, and image quality, while the PS2 is more effective at polygon geometry, physics, particles, and lighting. The PS2 has a more powerful CPU geometry engine, higher translucent fillrate, and more main RAM (32 MB, compared to Dreamcast’s 16 MB), while the DC has more VRAM (8 MB, compared to PS2’s 4 MB), higher opaque fillrate, and more GPU hardware features, with CLX2 capabilities like tiled rendering, super-sample anti-aliasing, Dot3 normal mapping, order-independent transparency, and texture compression, which the PS2’s GPU lacks.

Today, the Dreamcast is remembered for the uniqueness of its game library. In addition to absolutely stunning arcade ports like Soul Calibur, the Dreamcast had Phantasy Star Online, the first online console MMORPG. Games like Shenmue are considered progenitors of the open-world approach favored by long-running series like Grand Theft Auto (which itself began life as a top-down game, not a 3D, open-world, third-person title). Games like the cel-shaded Jet Set Radio and Crazy Taxi established the Dreamcast as a platform willing to take chances with game design. Titles like Skies of Arcadia offered players the chance to be sky pirates. Games like Seaman were… really weird.

Really, really weird.

Sometimes, the issues that sink a console are technical. Sometimes, the hardware is fine and it’s everything else that goes wrong. Here’s to one of the short-lived champions of a bygone age — and a more daring era in gaming, when developers and AAA publishers took more chances with quirky titles than they do today.


How tech is transforming the intelligence industry – gpgmail


At a conference on the future challenges of intelligence organizations held in 2018, former Director of National Intelligence Dan Coats argued that the transformation of the American intelligence community must be a revolution rather than an evolution. The community must be innovative and flexible, capable of rapidly adopting new technologies wherever they may arise.

Intelligence communities across the Western world are now at a crossroads: the growing proliferation of technologies, including artificial intelligence, Big Data, robotics, the Internet of Things, and blockchain, changes the rules of the game. The proliferation of these technologies, most of which are civilian, could create data breaches and lead to backdoor threats for intelligence agencies. Furthermore, since they are affordable and ubiquitous, they could be used for malicious purposes.

The technological breakthroughs of recent years have led intelligence organizations to challenge the accepted truths that have historically shaped their endeavors. The hierarchical, compartmentalized, industrial structure of these organizations is now changing, revolving primarily around the integration of new technologies with traditional intelligence work and the redefinition of the role of humans in the intelligence process.

Take, for example, Open-Source Intelligence (OSINT) – a term the intelligence community created to describe information that is unclassified and accessible to the general public. Traditionally, this kind of information was considered inferior to classified information, and as a result, investments in OSINT technologies were substantially lower than in other types of technologies and sources. That is changing now: agencies are realizing that OSINT is easier to acquire and often more beneficial than other, more challenging, types of information.

Yet this understanding is trickling down slowly, as the use of OSINT by intelligence organizations still involves cumbersome processes, including slow and complex integration of unclassified and classified IT environments. It isn’t surprising, therefore, that intelligence executives – for example, the head of the State Department’s intelligence arm and the nominee to become the Director of the National Reconnaissance Office – have recently argued that one of the community’s grandest challenges is the quick and efficient integration of OSINT into its operations.

Indeed, technological innovations have always been central to the intelligence profession. But when it came to processing, analyzing, interpreting, and acting on intelligence, human ability – with all its limitations – was always considered unquestionably superior. There is no question that the proliferation of data and data sources necessitates a better system of prioritization and analysis. But who should have supremacy: humans or machines?

A man crosses the Central Intelligence Agency (CIA) seal in the lobby of CIA Headquarters in Langley, Virginia, on August 14, 2008. (Photo: SAUL LOEB/AFP/Getty Images)

Big data comes for the spy business

The discourse is tempestuous. Intelligence veterans claim that there is no substitute for human judgment. They argue that artificial intelligence will never be capable of comprehending the full spectrum of considerations in strategic decision-making, and that it cannot evaluate abstract issues in the interpretation of human behavior. Machines can collect data and perhaps identify patterns, but they will never succeed in interpreting reality as do humans. Others also warn of the ethical implications of relying on machines for life-or-death situations, such as a decision to go to war.

In contrast, techno-optimists claim that human superiority, which defined intelligence activities over the last century, is already bowing to technological superiority. While humans are still significant, their role is no longer exclusive, and perhaps not even the most important in the process. How can the average intelligence officer cope with the ceaseless volumes of information that the modern world produces?

From 1995 to 2016, the amount of reading required of an average US intelligence researcher, covering a low-priority country, grew from 20,000 to 200,000 words per day. And that is just the beginning. According to forecasts, the volume of digital data that humanity will produce in 2025 will be ten times greater than is produced today. Some argue this volume can only be processed – and even analyzed – by computers.

Of course, the most ardent advocates for integration of machines into intelligence work are not removing human involvement entirely; even the most skeptical do not doubt the need to integrate artificial intelligence into intelligence activities. The debate centers on the question of who will help whom: machines in aid of humans or humans in aid of machines.

Most insiders agree that the key to moving intelligence communities into the 21st century lies in breaking down inter- and intra-organizational walls, including between the services within the national security establishment; between the public sector, the private sector, and academia; and between intelligence services of different countries.

It isn’t surprising therefore that the push toward technological innovation is a part of the current intelligence revolution. The national security establishment already recognizes that the private sector and academia are the main drivers of technological innovation.

Alexander Karp, chief executive officer and co-founder of Palantir Technologies Inc., walks the grounds after the morning sessions during the Allen & Co. Media and Technology Conference in Sun Valley, Idaho, U.S., on Thursday, July 7, 2016. Billionaires, chief executive officers, and leaders from the technology, media, and finance industries gather this week at the Idaho mountain resort conference hosted by investment banking firm Allen & Co. Photographer: David Paul Morris/Bloomberg via Getty Images

Private services and national intelligence

In the United States there is dynamic cooperation between these bodies and the security community, including venture capital funds jointly owned by the government and private companies.

Take In-Q-Tel – a venture capital fund established 20 years ago to identify and invest in companies that develop innovative technology which serves the national security of the United States, thus positioning the American intelligence community at the forefront of technological development. The fund is an independent corporation, which is not subordinate to any government agency, but it maintains constant coordination with the CIA, and the US government is the main investor.

Its most successful endeavor, which has grown into a multi-billion-dollar (if somewhat controversial) company, is Palantir, a data-integration and knowledge management provider. But there are copious other startups and more established companies, ranging from sophisticated chemical detection (e.g. 908devices), automated language translation (e.g. Lilt), and digital imagery (e.g. Immersive Wisdom) to sensor technology (e.g. Echodyne), predictive analytics (e.g. Tamr) and cybersecurity (e.g. Interset).

In fact, a significant part of intelligence work is already being done by such companies, small and large. Companies like Hexagon, Nice, Splunk, Cisco and NEC offer intelligence and law enforcement agencies a full suite of platforms and services, including analytical solutions such as video analytics, identity analytics, and social media analytics. These platforms help agencies obtain insights and make predictions from collected and historical data, using real-time data stream analytics and machine learning. A one-stop intelligence shop, if you will.
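To make that “stream analytics plus machine learning” pipeline a little more concrete, here is a minimal sketch in Python. It is not based on any of these vendors’ actual products or APIs; the event fields, threshold, and synthetic data are invented for illustration, and scikit-learn’s IsolationForest merely stands in for whatever anomaly models a real platform would use.

```python
# Minimal sketch: score a stream of events against a model trained on historical data.
# Hypothetical fields and data -- not any vendor's real API.
from dataclasses import dataclass
from typing import Iterable, List

import numpy as np
from sklearn.ensemble import IsolationForest


@dataclass
class Event:
    source: str             # e.g. "social_media", "cctv", "netflow"
    features: List[float]   # numeric features extracted upstream


def train_baseline(historical: np.ndarray) -> IsolationForest:
    """Fit an unsupervised model on historical (assumed mostly benign) data."""
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(historical)
    return model


def score_stream(model: IsolationForest, events: Iterable[Event]):
    """Flag events the model considers anomalous, in arrival order."""
    for event in events:
        score = model.decision_function([event.features])[0]
        if score < 0:  # negative decision scores are outliers for IsolationForest
            yield event, score


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = rng.normal(0, 1, size=(5000, 3))            # synthetic "historic data"
    live = [Event("netflow", list(rng.normal(0, 1, 3))) for _ in range(99)]
    live.append(Event("netflow", [8.0, 8.0, 8.0]))        # one obvious outlier
    model = train_baseline(history)
    for event, score in score_stream(model, live):
        print(f"anomaly from {event.source}: score={score:.3f}")
```

The point of the sketch is the division of labor: historical data sets the baseline, while live events are scored as they arrive, which is roughly what the “real-time analytics” marketing of these platforms describes.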

Another example of government and non-government collaboration is the Intelligence Advanced Research Projects Activity (IARPA) – a US government organization which reports to the Director of National Intelligence (DNI). Established in 2006, IARPA finances advanced research relevant to the American intelligence community, with a focus on cooperation between academic institutions and the private sector across a broad range of technological and social-science fields. With a relatively small annual operational budget of around $3bn, the fund gives priority to multi-year development projects that meet the concrete needs of the intelligence community. The majority of the studies it supports are unclassified and open to public scrutiny, at least until the stage of implementation by intelligence agencies.

Image courtesy of Bryce Durbin/gpgmail

Challenging government hegemony in the intelligence industry 

These are all exciting opportunities; however, the future holds several challenges for intelligence agencies:

First, intelligence communities are losing their primacy over collecting, processing and disseminating data. Until recently, these organizations’ raison d’être was, first and foremost, to obtain information about the enemy before said enemy could disguise it.

Today, however, a lot of information is readily available, and a plethora of off-the-shelf tools (some of which are free) allow all parties, including individuals, to collect, process and analyze vast amounts of data. Just look at IBM’s i2 Analyst’s Notebook, which gives analysts, for just a few thousand dollars, multidimensional visual analysis capabilities so they can quickly uncover hidden connections and patterns in data. Until recently, such capabilities belonged only to governmental organizations.
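The “hidden connections and patterns” such tools surface are, at their core, graph problems. The toy sketch below is emphatically not i2 Analyst’s Notebook or its API; it only illustrates the underlying idea using the open-source networkx library, with invented entities and relationships, and with shortest paths plus a simple centrality measure standing in for the multidimensional visual analysis a commercial product performs.

```python
# Toy link analysis: find indirect connections between entities of interest.
# The graph, entities, and relationships below are invented for illustration.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("Person A", "Shell Co 1"),     # director of
    ("Shell Co 1", "Bank Acct X"),  # holds account
    ("Bank Acct X", "Person B"),    # authorized signatory
    ("Person B", "Phone +000"),     # registered owner
    ("Phone +000", "Person C"),     # frequent contact
])

# Shortest chain linking two entities that never appear together directly.
path = nx.shortest_path(g, "Person A", "Person C")
print(" -> ".join(path))

# Entities that tie many others together (simple centrality as a "broker" signal).
brokers = sorted(nx.betweenness_centrality(g).items(), key=lambda kv: -kv[1])
print("top broker:", brokers[0])
```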

A second challenge for intelligence organizations lies in the nature of the information itself and its many different formats, as well as in the collection and processing systems, which are usually separate and lacking standardization. As a result, it is difficult to merge all of the available information into a single product. For this reason, intelligence organizations are developing concepts and structures which emphasize cooperation and decentralization.

The private market offers a variety of tools for merging information; ranging from simple off-the-shelf solutions, to sophisticated tools that enable complex organizational processes. Some of the tools can be purchased and quickly implemented – for example, data and knowledge sharing and management platforms – while others are developed by the organizations themselves to meet their specific needs.

The third challenge relates to a change in the principle of intelligence prioritization. In the past, collecting information about a given target required a specific decision to do so and dedicated resources allocated for that purpose, generally at the expense of a different target. But in this era of near-infinite quantities of information, almost unlimited access to it, advanced data storage capabilities and the ability to manipulate data, intelligence organizations can collect and store information on a massive scale without needing to process it immediately; it can be processed as required, a pattern sketched below.
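As a rough illustration of that “collect now, process on demand” pattern, here is a small Python sketch. SQLite stands in for whatever bulk storage a real organization would use, and the schema, sources, and records are entirely invented.

```python
# Sketch of "collect broadly now, process selectively later".
# SQLite stands in for bulk storage; the schema and records are invented.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE raw_intake (collected_at TEXT, source TEXT, payload TEXT)")

# Ingest phase: store everything, no per-target decision required.
records = [
    ("2025-01-01T12:00:00", "osint_feed", {"target": "alpha", "text": "..."}),
    ("2025-01-01T12:00:05", "osint_feed", {"target": "bravo", "text": "..."}),
    ("2025-01-02T08:30:00", "sensor_7",   {"target": "alpha", "reading": 42}),
]
db.executemany(
    "INSERT INTO raw_intake VALUES (?, ?, ?)",
    [(ts, src, json.dumps(payload)) for ts, src, payload in records],
)

# Later, when a target becomes a priority, pull only what is relevant.
rows = db.execute(
    "SELECT collected_at, source, payload FROM raw_intake WHERE payload LIKE ?",
    ('%"target": "alpha"%',),
).fetchall()
for row in rows:
    print(row)
```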

This development leads to other challenges, including: the need to pinpoint the relevant information when required; to process the information quickly; to identify patterns and draw conclusions from mountains of data; and to make the knowledge produced accessible to the consumer. It is therefore not surprising that most of the technological advancements in the intelligence field respond to these challenges, bringing together technologies such as big data with artificial intelligence, advanced information storage capabilities and advanced graphical presentation of information, usually in real time.

Lastly, intelligence organizations are built and operate according to concepts developed at the peak of the industrial era, which championed the principle of the assembly line, and which are both linear and cyclical. The linear model of the intelligence cycle – collection, processing, research, distribution and feedback from the consumer – has become less relevant. In this new era, the boundaries between the various intelligence functions, and between intelligence organizations and their ecosystem, are increasingly blurred.

 

The brave new world of intelligence

A new order of intelligence work is therefore required, and intelligence organizations are currently in the midst of a redefinition process. Traditional divisions – e.g. between collection and research; between internal security organizations and positive intelligence; and between the public and private sectors – are all becoming obsolete. This is not another attempt to carry out structural reforms: there is a sense of epistemological rupture which requires a redefinition of the discipline, of the relationships that intelligence organizations have with their environments – from decision makers to the general public – and the development of new structures and conceptions.

And of course, there are even wider concerns; legislators need to create a legal framework that incorporates data-based assessments in a way that takes the predictive aspects of these technologies into account while still protecting the privacy and security rights of individual citizens, at least in nation states that respect those concepts.

Despite the recognition of the profound changes taking place around them, today’s intelligence institutions are still built and operated in the spirit of Cold War conceptions. In a sense, intelligence organizations have not internalized the complexity that characterizes the present time – a complexity which requires abandoning the dichotomous (inside versus outside) perception of the intelligence establishment, as well as the understanding of the intelligence enterprise and government bodies as having a monopoly on knowledge; concepts that have become obsolete in an age of decentralization, networking and increasing prosperity.

Although some doubt the ability of intelligence organizations to transform and adapt themselves to the challenges of the future, there is no doubt that they must do so in this era in which speed and relevance will determine who prevails.



At A Glance: NEC MultiSync EA271F-BK Review



In many ways, NEC can be considered Japan’s Dell or IBM. The company has a long history in the tech world and is known for making a wide range of products, from processors and high-end business equipment to PCs and home game consoles. The company also makes business displays such as the NEC MultiSync EA271F-BK, the subject of today’s review, but this product gives the impression that the company is a bit out of touch with the modern market.

Design

NEC’s MultiSync EA271F-BK is a 27-inch display with an IPS panel that has a native resolution of 1920×1080. One of the more attractive aspects of this display is its bezels, which are almost nonexistent at just 1mm thick on the top and sides of the screen. The bezel at the bottom is somewhat larger, but this is excusable, as the display controls are located there.

Our sister site, PCMag, received one of these monitors for testing and measured its color accuracy using a Klein K10-A colorimeter and the SpectraCAL CalMan 5 software utility. These tests showed that the panel was able to reproduce 96.3 percent of the sRGB color gamut, which exceeds NEC’s rating of 95 percent and puts the MultiSync EA271F-BK above the average 1080p display in terms of color reproduction.

The MultiSync EA271F-BK also features an ergonomic stand that lets you rotate the display into portrait or landscape orientation, or anywhere in between. There are also two USB 3.0 ports and audio connections on this display that make it easy to connect headphones and USB devices.

This display comes with a pair of built-in speakers that PCMag reported as being reasonably loud but slightly distorted at high volume levels. These should be sufficient for most tasks, but if you really need to clearly hear something it’s best that you turn to a pair of headphones or dedicated speakers for better audio quality.

Conclusion

All things considered, NEC’s MultiSync EA271F-BK appears to be a solid, well-rounded 1080p 27-inch monitor, and if it were 2008 I’d probably give it a near-perfect rating. As it’s 2019 and NEC set the MSRP on this display at $419, however, that’s just not going to happen. The street price is actually even higher at the moment, at $549.09. The current market is flooded with 2K and 4K displays that offer better features at competitive prices, and NEC’s MultiSync EA271F-BK simply can’t compete.

Samsung’s 27-inch CHG70, for example, is curved, has a resolution of 2,560×1,440, and features both HDR and Quantum Dot technology, enabling it to reproduce up to 125 percent of the sRGB color gamut and 92 percent of the Adobe RGB color space. The Samsung display is slightly more expensive at $469.99, but it offers significantly better value for the price. There is also Dell’s 27-inch S2719DC, which has a resolution of 2,560×1,440 and the ability to display 99 percent of the sRGB gamut, and it currently costs less than the NEC at just $399.99.

Due in large part to its noncompetitive price tag, I’d rate this display a mere 2.5 out of 5. NEC simply can’t expect a monitor with lower-end specs to compete with the other displays in this price range.

 



