FICO Survey: Real-time Payments Platforms Have Increased Fraud Losses for 4 out of 5 APAC Banks

The proliferation of real-time payments platforms, including person-to-person (P2P) transfers and mobile payment platforms across Asia Pacific, has increased fraud losses for the majority of banks. Silicon Valley analytics firm FICO recently surveyed banks in the region and found that nearly 4 out of 5 (78 percent) have seen their fraud losses increase.
Further, more than a fifth (22 percent) say that fraud will rise significantly in the next 12 months, and an additional 58 percent expect a moderate rise.
“While the convenience of real-time payments is great news for customers, increasingly, banks have zero time to clear a transaction or payment. AI can’t slow down the clock, but it can help create systems that are radically quicker to recognize a transaction that smells likely to be fraudulent,” said Dan McConaghy, president of FICO in Asia Pacific. “Banks will need to move beyond passwords and OTPs and add biometrics, device telemetry and customer behavior analytics to keep up with the changing payments landscape.”
When asked which identity and authentication strategies they used, the majority of APAC banks said they employ multifactor authentication (84 percent). They increasingly use a wide range of authentication methods, including biometrics (64 percent), regular passwords (62 percent) and, in last place, behavioral authentication (38 percent). Interestingly, nearly half of respondents (46 percent) currently use only one or two of these strategies, potentially leaving them more exposed to attack vectors such as identity theft, account takeovers and cyberattacks.
“Why try to crack a safe when you can walk in the front door?” explained McConaghy. “Criminals are trying to fool banks into thinking they are new customers or stealing account access by tricking people into making security mistakes or giving away sensitive information. When they are successful, criminals are making use of real-time payments to move funds quickly through a maze of global accounts.”
The survey bore this out with 40 percent of banks in FICO’s survey naming social engineering as the number one fraud concern when it comes to real-time payments. Account takeovers were ranked second, with false accounts and money mules also rated as problems.
New forms of biometric, multifactor and behavioral technologies allow banks to stop payments even when an account uses the correct but stolen password, or enters the right, but intercepted, one-time password.
“Beyond this type of account take over, we also have authorized push payment fraud, such as when a customer is tricked into paying what they think is a legitimate invoice like a fake school bill or payment to a tradesperson,” said McConaghy. “This type of social engineering is harder to stop but better KYC, link analysis to find money mule accounts and behavioral analytics to flag new accounts for a regular payee, are all examples of how to tackle it.”
Beyond fraud on real-time payment platforms, criminals engaged in drug trafficking, human smuggling, tax evasion and terrorism financing are also attracted to the irrevocable nature of instant payments. The lack of visibility between jurisdictions has prompted regulators to encourage banks to move quickly in the cross-border payments space to ensure payments are compliant and secure.
In terms of mitigating this criminal behavior, more than 90 percent of APAC banks surveyed thought that convergence between their fraud and compliance functions would be helpful in defending transactions on real-time payments platforms.
“We estimate that there is about an 80 percent overlap in software functionality between legacy fraud and anti-money laundering systems,” added McConaghy. “To tackle fraud and money laundering schemes that exploit real-time money movement you need to leverage all the available technologies, automate as much as you can and introduce models that can identify outlier transactions and customer behavior so your teams can spend their time investigating the riskiest of the red flags.”
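As a minimal sketch of the outlier-flagging idea McConaghy describes, one could score each transaction against a customer’s own history. The threshold and data here are illustrative, and real systems combine many more signals (device telemetry, behavioral biometrics) than amount alone:

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

# A customer's recent transfer history, with one anomalous real-time payment.
history = [120, 95, 110, 130, 105, 98, 115, 9500]
print(flag_outliers(history))  # the 9500 transfer stands out
```

In practice such a score would be one feature among many feeding a risk model, not a standalone rule.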
FICO surveyed 45 executives from financial institutions across the region at its annual FICO Asia Pacific Fraud Forum.



Survey: New research shows it’s a hybrid and multi-cloud world

The big picture: At this point, virtually anyone who follows the tech industry in even the most casual way has probably heard, not only about the influence of cloud computing, but also about the impact of what is commonly called “multi-cloud.” What many don’t know, however, are the specifics: how much companies are using these cloud computing resources, what types of workloads they’re running in the cloud, why they chose to use cloud services, and much more.
A new research study, initiated by TECHnalysis Research, dove into all those details. It began with a survey of 600 US-based businesses (200 medium-sized companies with 100-999 employees and 400 large enterprises with 1,000 employees or more) who were users of cloud computing services. The results show that today’s cloud computing environment is an incredibly dense, rich tapestry of different workloads at various maturity levels running in different locations on different underlying platforms for many different reasons.
The basic idea with multi-cloud is that companies use multiple different cloud computing options as part of their overall computing environment. In some cases, that could mean using multiple public cloud providers, such as Amazon’s AWS, Microsoft’s Azure, and Google’s Cloud Platform (GCP), or it could mean they’re using one public cloud provider and one or more “private” or “hybrid” clouds, or some combination of all the above.
Private cloud refers to computing environments that use the same basic types of flexible technologies and software platforms that public clouds offer but do so either within the company’s own data center or in what’s called a “hosted environment.” These hosted environments are external sites that house the physical resources (servers, storage, networking equipment, etc.) necessary to run computer workloads from multiple different companies simultaneously.
Typically, these locations—which are sometimes called co-located sites or “colos” for short—provide power, strong physical security, and most importantly, high-speed connections to large telecommunication networks or other network service providers. Unlike with public cloud companies, however, the physical assets (and the workloads) at these sites remain under the control of the company requesting the service.
Hybrid cloud refers to environments that mix some element of public cloud computing providers with private and/or managed/hosted providers either within the data center or at a co-located site.
What the study found is that for companies like those surveyed, who have been using cloud computing for several years now, approximately 30% of today’s workloads are being run in the public cloud, another 30% are legacy applications still being run in the corporate data center, and the remaining 40% are a combination of private and hybrid cloud workloads, as Fig. 1 illustrates.

Fig. 1
Interestingly, when asked what companies expected the mix to look like in 18-24 months, the results weren’t significantly different, with about a 5-percentage-point drop in legacy workloads and about a 2.5-point increase each for public cloud and private/hybrid cloud workloads, suggesting the transition to new cloud-based workloads has slowed for many of these organizations.
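A quick sketch of that projected mix, taking the survey’s roughly 30/30/40 split and the reported shifts at face value:

```python
# Current vs. projected workload mix, in percent of workloads (survey figures).
current = {"public": 30.0, "legacy": 30.0, "private_hybrid": 40.0}
shift = {"public": +2.5, "legacy": -5.0, "private_hybrid": +2.5}  # over 18-24 months

# Shares must still sum to 100: the legacy decline is absorbed
# evenly by public and private/hybrid cloud.
projected = {k: current[k] + shift[k] for k in current}
print(projected)
```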
In addition to this diversity of high-level workload types, the study showed a large number of options being used within each of those groups. On average, for example, survey respondents were using 3.1 different public cloud providers across both IaaS (Infrastructure as a Service—typically access to the raw computing resources of a public cloud provider) and PaaS (Platform as a Service—adding software and services on top of the raw hardware) offerings. Of the nearly 87% of respondents who said they were running a private cloud of some type, they averaged 1.6 different private cloud platforms.
When it came to specific workload counts, companies averaged 3.4 workloads per public cloud provider and 2.9 workloads per private or hybrid cloud. Doing the math, that means organizations like the ones who participated in the survey typically run over 15 cloud-based workloads. On top of that, survey respondents deployed a number of SaaS (Software as a Service) cloud-based applications as well, including Microsoft’s Office 365, Google’s G Suite, Salesforce, and many others; the average per company worked out to 3.7. As a result, today’s US businesses are balancing nearly 19 cloud-based applications/workloads as part of their computing environment, as the table below shows.
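One plausible reading of the arithmetic behind the “over 15” and “nearly 19” figures (assuming the reported averages simply multiply and add, which the article implies but doesn’t spell out):

```python
# Back-of-envelope reconstruction of the survey's workload totals.
public = 3.1 * 3.4           # avg. public providers x workloads per provider
private_hybrid = 1.6 * 2.9   # avg. private platforms x workloads per platform
saas = 3.7                   # avg. SaaS applications per company

cloud_workloads = public + private_hybrid   # the "over 15" figure
total = cloud_workloads + saas              # the "nearly 19" figure
print(round(cloud_workloads, 2), round(total, 2))
```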

The reasons for moving all these different workloads to the cloud vary quite a bit by workload type, but looking at the weighted totals across the various types and locations provides some interesting, though not terribly surprising, insights into the rationale organizations use when they migrate or rebuild existing applications, or create new ones in the cloud. (Speaking of which, companies said that approximately one-third of their cloud-based applications fall into each of three categories: migrate, or “lift and shift”; rebuild, or “refactor”; and build new.)
The top reasons that survey respondents gave for migrating workloads to the cloud are to improve performance, to increase security, and because of the need to modernize applications. Cost savings actually came in fourth. Ironically, the top reasons those same companies cited for not moving some of their applications to the cloud were very similar: security concerns, performance challenges, regulatory requirements and costs. These dichotomies highlight the ongoing challenges and opposing forces that are a regular part of the modern cloud computing landscape.
There’s no doubt that cloud computing, in all its various forms, will continue to be a critical part of business computing environments for some time to come. Making sense of how experienced companies are approaching it can help vendors optimize their offerings and other businesses find their way through the often very confusing cloud computing world.
You can download a free copy, in PDF format, of the highlights of the TECHnalysis Research Hybrid and Multi-Cloud Study here. The full version is available for purchase.
Bob O’Donnell is the founder and chief analyst of TECHnalysis Research, LLC a technology consulting and market research firm. You can follow him on Twitter @bobodtech. This article was originally published on Tech.pinions.


Simplified Chinese is the most popular language on Steam, hardware survey claims

Bottom line: PC gaming is big business in China. So much so that Valve recently announced its decision to develop a China-specific version of its popular PC gaming-oriented digital distribution platform, Steam. Until that version arrives, though, many Chinese gamers are sticking to Steam proper, if the platform’s latest hardware survey is anything to go by.
The data collected in Steam’s monthly survey reveals that Simplified Chinese is now the most popular language on the platform. More specifically, the language now represents 37.87 percent of Steam users (or, at least, those who participated in December’s survey).
Notably, that number was much smaller in November, at 23.44 percent, which put Simplified Chinese in second place in the language popularity rankings. Why December saw such a massive increase (roughly 14 percentage points) is tough to say. Still, for one reason or another, an increasing number of Chinese PC gamers are checking out Steam’s content library lately. Of course, not every user with their language set to Chinese necessarily resides in China, but that is probably the case for many.
Regardless, the second most popular Steam language in December was English, which is no surprise. English-speaking Steam users represent 30.43 percent (down from 36.83 percent in November) of the survey-participating playerbase, with Russian trailing in third place at 9.36 percent.

Most of Steam’s other survey statistics aren’t particularly exciting or newsworthy. Nvidia’s GTX 1060 continues to dominate the Steam arena with 20.3 percent of the pie (80.51 percent of users own an Nvidia card, while 11 percent are on the Red Team), and 1920x1080 is still the most popular gaming resolution for “primary” monitors.
However, there was one other interesting tidbit worth drawing attention to: Windows 10’s share among Steam users dropped by a whopping 13.14 percentage points in December, down to 61.09 percent. Windows 7 made up the difference with a 14.57-point uptick in usage, reaching 33.04 percent in total.
It’s unclear why Windows 7 appears to be getting more popular among Steam players, especially given how close it is to its January 14 end-of-support date. This could be a statistical fluke, or a genuine sign of a small-scale reverse OS migration. If you have any theories, feel free to drop them in the comments. If not, be sure to check out Steam’s December hardware survey results for yourself right here.
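Since these are shares of a whole, the swings are best computed as percentage-point differences rather than percent changes. A small sketch, with the November figures back-computed from the deltas reported above:

```python
# Steam survey shares, percent of participating users.
november = {"Simplified Chinese": 23.44, "English": 36.83,
            "Windows 10": 74.23, "Windows 7": 18.47}
december = {"Simplified Chinese": 37.87, "English": 30.43,
            "Windows 10": 61.09, "Windows 7": 33.04}

# A share change is a difference in percentage points, not a percent change.
deltas = {k: round(december[k] - november[k], 2) for k in november}
print(deltas)
```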


Survey: Many AMD Ryzen 3000 CPUs Don’t Hit Full Boost Clock


Overclocker Der8auer has published the results of a survey of more than 3,000 owners of AMD’s new 7nm Ryzen CPUs, which went on sale in July. Last month, reports surfaced that Ryzen 3000 CPUs weren’t hitting their boost clocks as consistently as some enthusiasts expected. Now, we have some data on exactly what those figures look like.

There are, however, two confounding variables. First, Der8auer had no way to sort out which users had installed Windows 10 1903 and were running the most recent version of AMD’s chipset drivers. AMD recommends both to ensure maximum performance and desired boost behavior. Der8auer acknowledges this but believes the onus is on AMD to communicate with end users regarding the need to use certain Windows versions to achieve maximum performance.

Second, there’s the fact that surveys like this tend to be self-selecting. It’s possible that only the subset of end-users who aren’t seeing the performance they desire will respond in such a survey. Der8auer acknowledges this as well, calling it a very valid point, but believes that his overall viewing community is generally pro-AMD and favorably inclined towards the smaller CPU manufacturer. The full video can be seen below; we’ve excerpted some of the graphs for discussion.

Der8auer went over the survey data thoroughly in order to throw out results that didn’t make sense or were obviously submitted in bad faith. He compiled data on the 3600, 3600X, 3700X, 3800X, and 3900X. Clock distributions were measured at up to two standard deviations from the mean. Maximum boost clock was tested using Cinebench R15’s single-threaded test, per AMD’s recommendation.

Der8auer-3600

Data and chart by Der8auer.

In the case of the Ryzen 5 3600, 49.8 percent of CPUs hit their boost clock of 4.2GHz, as shown above. As clocks rise, however, the share of CPUs that can hit their boost clock drops. Just 9.8 percent of 3600X CPUs reach their 4.4GHz boost clock. The 3700X’s chart is shown below for comparison:

Data and chart by Der8auer.

The majority of 3700X CPUs are capable of hitting 4.375GHz, but the 4.4GHz boost clock is a tougher leap. The 3800X does improve on these figures, with 26.7 percent of CPUs hitting boost clock. This seems to mirror what we’ve heard from other sources, which have implied that the 3800X is a better overclocker than the 3700X. The 3900X struggles more, however, with just 5.6 percent of CPUs hitting their full boost clock.

We can assume that at least some of the people who participated in this study did not have Windows 10 1903 or updated AMD drivers installed, though AMD users had the most reason to install those updates in the first place, which should help limit the impact of this confounding variable.

The Ambiguous Meaning of ‘Up To’

Following his analysis of the results, Der8auer makes it clear that he still recommends AMD’s 7nm Ryzen CPUs with comments like “I absolutely recommend buying these CPUs.” There’s no ambiguity in his statements and none in our performance review. AMD’s 7nm Ryzen CPUs are excellent. But an excellent product can still have issues that need to be discussed. So let’s talk about CPU clocks.

The entire reason Intel (which debuted the capability) launched Turbo Boost as a product feature was to give itself leeway on CPU clocks. At first, CPUs with “Turbo Boost” simply appeared to treat the higher, optional frequency as their effective target frequency even under 100 percent load. This is no longer true, for multiple reasons. CPUs from AMD and Intel will sometimes run at lower clocks depending on the mix of AVX instructions. Top-end CPUs like the Core i9-9900K may throttle back substantially under sustained full load (20-30 seconds) if the motherboard is configured to use Intel’s default power settings.

In other realms, like smartphones, it is not necessarily unusual for a device to never run at maximum clock. Smartphone vendors don’t advertise base clocks at all and don’t provide any information about sustained SoC clock under load. Oftentimes it is left to reviewers to typify device behavior based on post-launch analysis. But CPUs from both Intel and AMD have typically been viewed as at least theoretically capable of hitting their boost clock in some circumstances.

The reason I say that view is “theoretical” is that we see a lot of variation in CPU behavior, even over the course of a single review cycle. It’s common for UEFI updates to arrive after our testing has already begun. Oftentimes, those updated UEFIs specifically fix issues with clocking. We correspond with various motherboard manufacturers to tell them what we’ve observed and we update platforms throughout the review to make certain power behavior is appropriate and that boards are working as intended. When checking overall performance, however, we tend to compare benchmark results against manufacturer expectations as opposed to strictly focusing on clock speed (performance, after all, is what we are attempting to measure). If performance is oddly low or high, CPU and RAM clocks are the first place to check.

It’s not unusual, however, to be plus-or-minus 2-3 percent relative to either the manufacturer or our fellow reviewers, and occasional excursions of 5-7 percent may not be extraordinary if the benchmark is known for producing a wider spread of scores. Some tests are also more sensitive than others to RAM timing, SSD speed, or a host of other factors.

Now, consider Der8auer’s data on the Ryzen 9 3900X:

Der8auer-3900X

Image and data by Der8auer.

Just 5 percent of the CPUs in the batch are capable of hitting 4.6GHz. But a CPU clocked at 4.6GHz is just over 2 percent faster than one clocking in at 4.5GHz. A 2 percent gap between two products is close enough that we call it an effective tie. If you were to evaluate CPUs strictly on the basis of performance, with a reasonable margin of, say, 3 percent, you’d wind up with an “acceptable” clock range of 4,462MHz – 4,738MHz (assuming a 1:1 relationship between CPU clock and performance). And if you allow for that variance in the graphs above, a significantly larger percentage of AMD CPUs (though not all) “qualify” as effectively reaching their top clock.
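That “effective tie” reasoning is easy to make concrete. A sketch under the same assumptions used above (1:1 clock-to-performance scaling, 3 percent margin):

```python
boost_mhz = 4600   # the 3900X's advertised "up to" clock
margin = 0.03      # treat anything within 3% as an effective tie

# Assuming performance scales 1:1 with clock, the "acceptable" band is:
low = round(boost_mhz * (1 - margin))
high = round(boost_mhz * (1 + margin))

# And the raw gap between 4.6GHz and 4.5GHz, in percent:
speedup_pct = round((boost_mhz / 4500 - 1) * 100, 1)
print(low, high, speedup_pct)
```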

On the other hand, 4.5GHz or below is factually not 4.6GHz. There are at least two meaningfully different ways to interpret the meaning of “up to” in this context. Does “up to X.XGHz” mean that the CPU will hit its boost clock some of the time, under certain circumstances? Or does it mean that certain CPUs will be able to hit these boost frequencies, but that you won’t know if you have one or not? And how much does that distinction matter, if the overall performance of the part matches the expected performance that the end-user will receive?

Keep in mind that one thing these results don’t tell us is what overall performance looks like across the entire spread of Ryzen 3000 CPUs. Simply knowing the highest boost clock a CPU hits doesn’t show us how long it sustained that clock. A CPU that holds a steady 4.5GHz from start to finish will outperform a CPU that bursts to 4.6GHz for one second and drops to 4.4GHz to finish the workload. Both behaviors are possible under an “up to” model.
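A toy illustration of why burst behavior matters, with hypothetical timings; the time-weighted average clock is what actually determines throughput:

```python
def avg_effective_clock(segments):
    """Time-weighted average clock over (seconds, MHz) segments."""
    total_time = sum(t for t, _ in segments)
    return sum(t * mhz for t, mhz in segments) / total_time

steady = [(10, 4500)]             # holds 4.5GHz for the whole 10-second run
bursty = [(1, 4600), (9, 4400)]   # bursts to 4.6GHz, then drops to 4.4GHz
print(avg_effective_clock(steady), avg_effective_clock(bursty))
```

Despite advertising a higher peak, the bursty CPU averages 4420MHz and loses to the steady 4500MHz part.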

Manufacturers and Consumers May See This Issue Differently

While I don’t want to preempt his upcoming article, we’ve spent the last few weeks at ET troubleshooting a laptop that my colleague David Cardinal recently bought. Specifically, we’ve been trying to understand its behavior under load when both the CPU and GPU are simultaneously in use. Without giving anything away about that upcoming story, let me say this: the process has been a journey into just how complicated thermal management between various components has become.

Manufacturers, I think, increasingly look at power consumption and clock speed as a balancing act in which performance and power are allocated to the components where they’re needed and throttled back everywhere else. Increased variability is the order of the day. What I suspect AMD has done, in this case, is set a performance standard it expects its CPUs to deliver rather than a specific clock frequency target. If I had to guess why, I’d point to the intrinsic difficulty of maintaining high clock speeds at smaller process nodes. AMD likely chose to push the envelope on its clock targets because it made the CPUs compare better against their Intel equivalents on maximum clock speed. Any negative response from critics would be muted by the fact that these new CPUs deliver marked benefits over both previous-generation Ryzen CPUs and their Intel equivalents at equal price points.

Was that the right call? I’m not sure. This is a situation where I genuinely see both sides of the issue. The Ryzen 3000 family delivers excellent performance. But even after allowing for variation caused by Windows version, driver updates, or UEFI issues on the part of the manufacturer, we don’t see as many AMD CPUs hitting their maximum boost clocks as we would expect, and the higher-end CPUs with higher boost clocks have more issues than lower-end chips with lower clocks. AMD’s claims of getting more frequency out of TSMC 7nm as compared with GF 12/14nm seem a bit suspect at this point. The company absolutely delivered the performance gains we wanted, and the power improvements on the X470 chipset are also very good, but the clocking situation was not detailed the way it should have been at launch.

There are rumors that AMD changed boost behavior with recent AGESA versions. Asus employee Shamino wrote:

i have not tested a newer version of AGESA that changes the current state of 1003 boost, not even 1004. if i do know of changes, i will specifically state this. They were being too aggressive with the boost previously, the current boost behavior is more in line with their confidence in long term reliability and i have not heard of any changes to this stance, tho i have heard of a ‘more customizable’ version in the future.

I have no specific knowledge of this situation, but this would surprise me. First, reliability models are typically hammered out long before production. Companies don’t make major changes post-launch save in exceptional circumstances, because there is no way to ensure that the updated firmware will reach the products that it needs to reach. When this happens, it’s major news. Remember when AMD had a TLB bug in Phenom? Second, AMD’s use of Adaptive Frequency and Voltage Scaling is specifically designed to adjust the CPU voltage internally to ensure clock targets are hit, limiting the impact of variability and keeping the CPU inside the sweet spot for clock.

I’m not saying that AMD would never make an adjustment to AGESA that impacted clocking. But the idea that the company discovered a critical reliability issue that required it to make a subtle change that reduced clock by a mere handful of MHz in order to protect long-term reliability doesn’t immediately square with my understanding of how CPUs are designed, binned and tested. We have reached out to AMD for additional information.

I’m still confident and comfortable recommending the Ryzen 3000 family because I’ve spent a significant amount of time with these chips and seen how fast they are. But AMD’s “up to” boost clocks are also more tenuous than we initially knew. It doesn’t change our expectation of the part’s overall performance, but the company appears to have decided to interpret “up to” differently this cycle than in previous product launches. That shift should have been communicated. Going forward, we will examine both Intel and AMD clock behavior more closely as a component of our review coverage.


Employee survey startup Culture Amp closes $82M round led by Sequoia China – gpgmail


Each unhappy startup may be unhappy in its own way, but there’s still wisdom in understanding what drives employee satisfaction and dissatisfaction across companies.

Culture Amp is just one of the companies aiming to help employees anonymously express how they feel about their place of work, but the Melbourne company is using anonymized employee survey data from thousands of customers to help them learn from each other and chart which initiatives made a dent.

The eight-year-old startup has picked up a new round of funding to help it further extend its customer base.

Culture Amp just closed a sizable $82 million funding round led by Sequoia Capital China with participation from Sapphire Ventures, Felicis Ventures, Index Ventures, Blackbird Ventures, Hostplus, Skip Capital and Grok Ventures, Global Founders Capital and TDM Growth Partners.

The Series E doubles the company’s total funding raised to date, which now sits at $158 million. Culture Amp closed its last major round, a $40 million Series D, in July of last year.

The company’s subscription survey software gives customers all of the templates, questions and analytics that they need to track employee sentiment and visualize the data that they get back. The software can be used for things like quarterly engagement surveys, but it can also power performance reviews, goal-setting and self-reflections.

Employee surveys are certainly nothing revolutionary, but Culture Amp is trying to improve the process by helping its customers start to bring anonymous feedback to the team-level so that employees can give more direct feedback to their managers.

CEO Didier Elzinga tells me the company now has 2,500 customers with a collective 3 million Culture Amp employee surveys under their belts. Elzinga tells gpgmail that harnessing the collective intelligence of its network to predict things like employee turnover is perhaps one of its strongest value propositions.

“Once you understand the experience that people are having, once you know where you should focus, how do we actually help you act on it?” he tells gpgmail. “A large part is bringing to bear the collective intelligence of the thousands of companies we already have so that you can learn from people that have suffered from the same sorts of problems.”

The 400-person company’s customers include McDonald’s, Salesforce, Slack and Airbnb.

