Dragon Naturally Speaking e-Learning - Training

Friday, November 6, 2015

Making analytics work for quality improvement? Harder than it looks

'As far as actionable data, it truly does become a workflow issue'

It's one thing to say you're going to embrace data analytics; it's another to do it. And it's something else entirely to actually turn the insights derived from clinical and business intelligence into better care and lower costs.

As Director of Government Programs at Providence Health & Services, the third largest non-profit health system in the U.S., Ray Manahan has his work cut out for him. His job is to lead a team that keeps track of ever-evolving government payment programs, researching their requirements and financial impacts and communicating them to organizational leadership, from the C-suite to quality chiefs and physicians.
Educating those disparate groups about the process changes and operational adjustments necessary to meet the moving targets established by the Centers for Medicare & Medicaid Services would be challenging enough for a normal-sized system. The fact that Providence has 33 hospitals across five states means ensuring compliance across the enterprise is a challenge, to say the least.
At the Healthcare IT News Big Data and Healthcare Analytics Forum in Boston next week, Manahan will offer some insights into the work he does to help prevent Providence from paying millions of dollars in penalties for not meeting CMS-required thresholds. It's not an easy task, and in some cases, the penalties are hard to avoid.
His presentation, Ready, Set, Go: Formulating Actionable Data to Drive Value, will explain the challenges he and his team have faced as the clinical and financial data across those 33 hospitals has proliferated and grown in complexity. He'll offer tips and best practices for data collection, analysis, visualization and, crucially, the keys to communicating the lessons learned from the data to the people responsible for turning insights into positive change.
That, after all, is what "actionable data" is all about. Otherwise, it's just numbers and words and cool-looking dashboards.
The name of the game, Manahan tells Healthcare IT News, is "interpreting the data so our end-users understand what change needs to happen."
"When I say actionable data, it's simply taking these scores and putting them on a report card: Marking any given hospital within our 33-hospital system and rolling it up into a document that says you are red, green or yellow. And we need to draw attention to the appropriate stakeholders to make sure we're turning that green. Otherwise we're going to be hit with another penalty," he continues.
But of course that's easier said than done. Moving the needle on some of these CMS measures "truly does become a workflow issue for us: how our end-users are taking care of patients to provide a better patient experience, so this lands with our clinicians," says Manahan. "It's a lot of work to improve these scores."
Three specific measures Manahan will focus on in Boston: hospital acquired conditions, value-based purchasing and hospital readmissions.
For many of those, "we're in the reds and yellows, still," he says. "But we have some greens, and we need to home in on those."
That can be a challenge, especially because all of the measures have different timelines associated with them and, sometimes, the measurement thresholds seem unclear.
To take just one example, for hospital acquired conditions, "there are two measures in there, they call them Domain 1 and Domain 2," he says. "The threshold given by CMS is that we need to get below 5 for our score."
For one thing, the performance period for Domain 1 (Patient Safety Indicators 90, or PSI 90, a weighted average of several patient safety measures) runs from July 1, 2013 to July 30, 2015. The period for Domain 2, meanwhile (catheter-associated urinary tract, central-line associated bloodstream and surgical site infections, or CAUTI/CLABSI/SSI), runs from Jan. 1, 2014 to Dec. 31, 2015.
"Why can't we align those performance periods if they're falling under a specific measure?" asks Manahan, not unreasonably.
Further muddying the waters, he says, is the fact that the weight behind the performance scores for each domain is different – 35 percent for PSI 90, and 65 percent for CAUTI/CLABSI/SSI.
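The "math" Manahan alludes to boils down to a weighted average of the two domain scores, compared against the CMS threshold of 5. A minimal sketch, using purely illustrative hospital scores rather than actual CMS data, and simplifying away the program's scoring details:

```python
# Illustrative sketch of the HAC composite score described above:
# Domain 1 (PSI 90) weighted 35%, Domain 2 (CAUTI/CLABSI/SSI) weighted 65%.
# The hospital scores below are hypothetical examples, not CMS figures.

DOMAIN_WEIGHTS = {"psi_90": 0.35, "cauti_clabsi_ssi": 0.65}
PENALTY_THRESHOLD = 5.0  # per the article, hospitals need to score below 5

def hac_composite(domain_scores: dict) -> float:
    """Weighted average of the per-domain scores."""
    return sum(DOMAIN_WEIGHTS[d] * s for d, s in domain_scores.items())

# A hypothetical hospital: decent on safety indicators, weaker on infections.
score = hac_composite({"psi_90": 4.0, "cauti_clabsi_ssi": 6.0})
print(round(score, 2))  # 0.35*4 + 0.65*6 = 5.3
print("penalty" if score >= PENALTY_THRESHOLD else "no penalty")
```

Because Domain 2 carries nearly twice the weight, a hospital can be comfortably "green" on PSI 90 and still land in penalty territory on the strength of its infection scores alone – which is part of why the weighting draws complaints.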
Performing analytics on hospital data for these very specific indicators is one thing. But turning that insight into concrete plans for performance improvement – and communicating them to hundreds or even thousands of clinicians – is quite another.
"There's all this math," says Manahan. "Now try explaining that to a doctor when their most important priority is to provide excellent patient care."
At Providence, by the way, there are about 4,500 of those.
"To make something actionable is very, very difficult here because of the size of our enterprise," he says.
That, as much as any complicated hoop-jumping required by Washington, is one of the biggest challenges faced by the sprawling Providence system.
So one strategy, says Manahan, has been to find areas where it's possible to make use of high performers: "There are hospitals within our system that have great teams, primarily led by quality leaders – epidemiologists who understand these programs and can help us deliver the right message based on who the stakeholder is.
"It could be a nurse, could be a physician," he adds. "It could be someone from quality who works directly with the physicians. But I think finding the right peers to drive the message and finding some of these champions is one of these things we're accentuating more and more."
One strategy Providence Health & Services is embracing is to hold a summit this fall, convening those hospitals most often in the "green" to help offer insights, lessons learned and best practices to those still muddling through the reds and yellows.
These subject matter experts, these so-called "super users," are key to getting the message across to the clinical folks on the frontlines responsible for really making some of these quality improvement changes become a reality. Many of them, after all, are likely to be skeptical of a number-cruncher telling them how to do things.
They'll usually listen to their peers, however. "We're kind of the middle men," says Manahan. "They can have that kind of dialogue. Super users are important. If you have a physician champion, you're going to be in a good spot. They're the ones that can walk the floors with the other clinicians and really help understand the need for the change."


Friday, October 16, 2015

ICD-10 Superbill: Will superbills survive ICD-10 implementation?

If you're looking for something that will make work easier as an ICD-10-compliant medical practice, you can find your most-used ICD-9 codes and translate them to ICD-10.
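As a rough sketch of that first step – tallying a practice's most frequently billed ICD-9 codes and running them through a crosswalk (for example, one built from the CMS General Equivalence Mappings) – here is a hypothetical example; the claims data and the mapping entries are illustrative, not a complete or authoritative crosswalk:

```python
from collections import Counter

# Hypothetical crosswalk built from a mapping source such as the CMS GEMs.
# Note that some ICD-9 codes map one-to-many in ICD-10, which is exactly
# why translated superbills balloon in size.
ICD9_TO_ICD10 = {
    "250.00": ["E11.9"],                       # type 2 diabetes, uncomplicated
    "401.9":  ["I10"],                         # essential hypertension
    "780.79": ["R53.1", "R53.81", "R53.83"],   # one-to-many mapping
}

def top_codes(claim_codes: list, n: int = 2) -> list:
    """Return the n most frequently billed ICD-9 codes."""
    return [code for code, _ in Counter(claim_codes).most_common(n)]

# Illustrative claims history for a small practice.
claims = ["401.9", "250.00", "401.9", "780.79", "401.9", "250.00"]
for icd9 in top_codes(claims):
    print(icd9, "->", ICD9_TO_ICD10.get(icd9, ["(needs manual review)"]))
```

Even this toy example shows the problem discussed below: a single high-volume ICD-9 code can fan out into several ICD-10 candidates, so a straight translation multiplies the length of any paper form.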
And maybe you can take that ICD-10 preparation a step further.
During the ICD-10 Implementation Strategies for Physicians National Provider Call, hosted by staff members at the Centers for Medicare & Medicaid Services (CMS), Daniel Duvall, medical officer for the CMS Hospital and Ambulatory Policy Group, suggested that medical practices create an ICD-10 version of the superbill.
Jen Searfoss, an attorney who represents individual and group health care providers and integrated health systems, doesn't see the superbill being preserved.
"Unless you can make the superbill really, really long," says Searfoss. "It's going to have to be one of those 11x14 sheets; nobody's going to be able to find it."
Just how long could it be? Gayl Kirkpatrick, solution sales executive for 3M HIS Consulting Services, has an example from one hospital her team consulted. "We took a two-page superbill in ICD-9 and translated that into ICD-10," Kirkpatrick said. "It became a 48-page superbill."
A less drastic example has ICD-10 inflating a superbill from two pages to nine. That's still significant growth in paperwork - enough to raise the question of just how useful superbills will be.
"Do you know anyone in your organization moving through a 48-page superbill?" asked Kirkpatrick.
You may need to take a second crack at it and pare down the available diagnoses. Or make them more broad. "Are you going to have a superbill that only has the high-level codes?" asks Searfoss.
Adding pages or decreasing specificity could be the medical coding equivalent of shuffling deck chairs on the Titanic. Clinicians may need a different tool. Will ICD-10 make the superbill obsolete?
Carl Natale blogs regularly at ICD10Watch.com.
Source: http://www.govhealthit.com/blog/icd-10-superbill-will-superbills-survive-icd-10-implementation


Friday, October 9, 2015

Review: Nuance Dragon NaturallySpeaking 13

About 15 years ago at a trade exhibition I was blown away by a demonstration of Dragon NaturallySpeaking. With the style and panache of a stage conjurer the American presenter took suggestions - jokes, sayings, lines from songs - from the rapt audience and transferred them to the big screen behind him using only his voice. Then by issuing aural commands, he juggled the words into new and sometimes hilarious sentences without ever once looking around. Truly this was the future.

Except of course, it wasn't. When I finally got my hands on a copy sadly the experience was anything but magical. Slow, inaccurate and clunky in the extreme it eventually invoked the blue screen of death on my PC, at which point the future was consigned to the bin. I could only conclude that I had fallen victim to a modern version of the infamous Turk, the chess-playing "automaton" that astonished audiences in the 1820s, but which in fact concealed a highly accomplished (if uncomfortable) chess-playing human in its base. Either that or the software only answered to a Silicon Valley accent.

Now, of course, that future really has arrived. Speech recognition is commonplace on our smartphones in the shape of Siri, Google Now and Cortana, and on the PC and Mac Dragon NaturallySpeaking has now notched up 13 versions.

I tested the Premium version of NaturallySpeaking 13 on a reasonably powerful PC (Core-i7, 4GB RAM, Windows 7). However, the installation still took quite a few minutes and involved a couple of optional reboots, which was all a bit mysterious. Apparently the installer analyses the hardware and optimises the functionality accordingly, so features such as natural language commands that demand more grunt will not be installed by default on lower-powered systems. This might be the reason for the lengthy installation process.

NaturallySpeaking eventually showed up on screen in the shape of a small toolbar, the DragonBar, at the top of the screen. A start-up screen prompts the user to select a language and to read a few sentences to acclimatise NaturallySpeaking to the speaker's voice. There was an initial hiccup, however, in that the software didn't recognise the headset and mic I had plugged into a mini-jack port, instead defaulting to a USB webcam, through which it recognised about one in three words. Not an auspicious start. However, things improved markedly once I had realised what was going on and replaced the headset with a USB one. The commercial version of Premium comes with its own USB headset and mic so this shouldn't be a problem for most users.

That hiccup aside, getting started proved to be ridiculously easy. Selecting "Standard UK English" then reading out a few sentences in my neutral southern tones was all it took for Dragon to recognise correctly the vast majority of words. Whether it would struggle with a broad regional accent is another matter. A quick scan of commentary on the internet suggests that it might.

There is something of a learning curve to using NaturallySpeaking, simply because of the sheer number of commands. There is a core set of 50 or so global commands, then you have commands specific to a certain application, for example Microsoft Paint, and optionally Dragon supports natural language commands in Microsoft Office, Firefox and other applications so you can say "bold that" or "make that bold" to achieve the same result. According to Nuance, the list of applications that can use natural language commands is growing all the time, and now includes Gmail and Hotmail on all the popular web browsers. And if that's not enough you can add your own custom commands.

But the learning process is a two-way thing. While you are learning about NaturallySpeaking, NaturallySpeaking is also learning more about you. When you close the program it spends a minute or two refining your "profile" so as to increase its accuracy next time. And you can always take matters into your own hands and read Dragon a spot of Isaac Asimov ("Captain Dimitri Chandler [M2973.04.21/93.106//Mars//Space-Acad3005//*//] - or 'Dim' to his very best friends") if you want to really put it through its paces.

NaturallySpeaking is a pretty comprehensive program and I only had time to scratch the surface. It should be perfectly possible, given enough time and effort, to navigate all aspects of the PC and the internet - and even do some light programming if you fancy it - without ever having to lay hands on keyboard or mouse.

My immediate needs - and the reason I was keen to reacquaint myself with Dragon - were much simpler, specifically that I have the typing skills of a drunken ox in boxing gloves, which makes transcribing interviews - a necessary evil in this line of work - a particularly tedious task.

The Holy Grail for my particular use case would be the ability to distinguish accurately between multiple voices, for example to transcribe a meeting automatically. Sadly though, this is not yet possible. NaturallySpeaking (and, I believe, similar programs) can only cope with one voice at a time and it struggles with background noise. Professional transcribers can breathe easy for now.

The workaround is to listen to the recording and "parrot" what you hear. Much slower than a direct transcription, but quite a bit quicker and certainly much more accurate than the keyboard for a hamfisted typist like me.

Nuance claims 99 per cent accuracy out of the box, which seems a rather bold claim. While dictating from a recording, an error rate of one word in 20 - that is, 95 per cent accuracy - seemed closer to the truth.
Unlike Siri and similar, which process input in the cloud, NaturallySpeaking does all its crunching locally, which means that whatever you say appears on screen pretty much immediately. This is very helpful when dictating because when mistakes are made you can be ready for them and go back and correct them quickly. I made a lot of mistakes early on, while getting used to the commands. Things improved reasonably quickly but there will always be ambiguities and misunderstandings.

NaturallySpeaking generally does a good job of choosing the correct homonym according to context, but it does sometimes make mistakes, for example selecting "right" instead of "write". You also have to train it to use specialised words. I found it learned easy ones like Hadoop first time, but after many attempts I still failed to get it to output "Azure" instead of "as your".

When transcribing recordings I found it quicker and easier to correct mistakes using a keyboard, but in time and with practice there is no reason why you couldn't do everything via voice commands. However, I struggled to navigate the web using NaturallySpeaking. On pages where links exist as hypertext there is no problem: just say "click XYZ" and off you go. But with the checkboxes in Yahoo! Mail for example, or in websites where links hide behind graphics you can spend a long time shouting at your computer to no avail whereas a simple mouse click does the job in a couple of seconds.

NaturallySpeaking comes in a number of different versions, including specialist editions for the legal and healthcare markets. The Premium edition tested includes functionality such as full text control for Excel and PowerPoint, the ability to play back your speech in documents, transcription from approved digital recorders and other features that are not available on the Home Edition. However, for my purposes - parroting interviews from recordings - the functionality of the latter, which at £71.99 is about £55 cheaper, would have been quite adequate.

Overall, NaturallySpeaking is an impressive package. I was expecting a much lengthier training process, but you really can get going pretty much immediately. The training and help files are genuinely useful and easy to follow, and adding new vocabulary and other customisations is straightforward.

Now, if they could rise to the challenge of transcribing multiple voices, that really would be something to speak home about.


Thursday, October 1, 2015

Breach Tally: HIPAA Omnibus' Impact

It's been two years since enforcement of the HIPAA Omnibus Rule's modified breach notification requirements began. But the most significant changes on the federal tally of major health data breaches since then appear to have more to do with a surge in hacker activity than the new requirements under HIPAA Omnibus.
As of Sept. 29, the Department of Health and Human Services' Office for Civil Rights' "wall of shame" website listing breaches impacting 500 or more individuals shows 1,338 breaches affecting a total of 154 million individuals since September 2009.
When the breach notification rule was modified under HIPAA Omnibus, some experts predicted the number of breaches reported would surge because of its new, more objective, requirements. Indeed, there was a surge in the first year after enforcement began. But since then, growth in the number of breaches reported has substantially leveled off. Significantly, however, a handful of recent mega-breaches involving hackers have affected many millions of victims.
The total number of breaches on the tally has nearly doubled since Sept. 23, 2013, when HIPAA Omnibus enforcement kicked in, but the number of individuals affected is up almost five-fold (see After HIPAA Omnibus, Breach Tally Spikes). In the last 12 months, however, the total number of breaches grew by only about 19 percent, while the total number of individuals affected tripled, largely due to the hacker attacks.
The 10 largest breaches in 2015 have all involved hackers, affecting a total of 111.2 million individuals. Of those, the top five breaches alone affected more than 108 million individuals, including the cyberattack on Anthem Inc., which affected about 79 million, and the hacker attack on Premera Blue Cross, which impacted 11 million.
The third largest breach since 2009 was added to the list just this month: the cyberattack on Excellus BlueCross BlueShield, which the health plan revealed earlier this month and which affected 10 million individuals.

Top Five HIPAA Breaches So Far in 2015

Analyzing the Trends

"We are seeing more big data breaches mainly because they are happening more often as cybercriminals recognize the commercial value of the data," notes privacy and security expert Kate Borten, founder of the consulting firm The Marblehead Group. "While clarifying the breach determination process is likely to have resulted in more reported breaches, the fact is that there continue to be many more small and midsize breaches than large ones."
Under the modified breach notification rule, security incidents are now presumed to be reportable breaches unless organizations demonstrate through a four-factor assessment that the risk of compromise to protected health information is low. Prior to the rule modifications, reportable incidents were determined more subjectively, based on whether the incident was likely to cause an individual reputational, financial or other harm.
"In my own experience dealing with clients, people are taking the [modified breach notification rule] seriously," says privacy attorney Kirk Nahra of the law firm Wiley Rein. "But what's less clear is whether what's being reported would've been reported anyway. Overall, I don't think it's made much impact. We're still seeing plenty of modest-sized breaches, but the most significant breaches we're seeing now have been due to hackers."

Breaches Tied to Mistakes Continue

Despite tens of millions of individuals being affected by fewer than a dozen of the 200-plus breaches that have been added to the Wall of Shame within the last 12 months, more incidents involving mistakes by organizations are still showing up on the tally, says privacy attorney David Holtzman, vice president of compliance at security consulting firm CynergisTek.
"What is surprising to me is that we are not seeing overall reductions in the gross numbers of reportable breaches due to theft and loss of [unencrypted] media and devices," he says. "With the increased attention, awareness and availability of user-friendly, affordable encryption solutions, these types of breaches are eminently preventable. Yet, they continue to occur at an alarming rate."
Nevertheless, some experts predict that a relatively small number of new mega-hacking incidents will continue to account for the majority of breach victims in the months and years ahead.
"We expect more hacking attacks to be reported during the remainder of 2015 and well into 2016," says Dan Berger, CEO of security consulting firm Redspin.
Holtzman agrees with that prediction. "Indications are that organizations with large networks associated with health insurers are performing retrospective forensic audits in which they are discovering that their systems had been infiltrated months earlier," he notes. "I expect to see additional reporting of these incidents as they are discovered." That was the case with both the Excellus and CareFirst Blue Cross Blue Shield breaches. In both situations, the health plans belatedly discovered they too were victims of cyberattacks after hiring a third-party to perform a forensic review of their systems following the hacker attack on Anthem.

Preventive Measures

Healthcare entities and business associates can take a number of steps to improve breach prevention, experts say.
"To combat this, first acknowledge the problem: Healthcare organizations currently underspend on security," Berger says. "Those days are over. We recommend looking beyond the HIPAA security risk assessment to more direct security testing, such as penetration testing and social engineering."
Holtzman emphasizes that health systems "must do a better job of protecting the enterprise, hardening their systems, enhancing detection capabilities of networks, testing application environments and increasing the education of its workforce."
Breach detection and reporting is still a weak area for many entities, Borten contends. "I believe many, if not most, breaches are still going undetected," she says. "And in spite of the HIPAA Omnibus Rule clarification on breach determination, some organizations continue to misinterpret security and privacy incidents and underreport."


Friday, September 25, 2015

Because we can’t be Ferris Bueller all the time: Using speech to stay productive at school

In his own strange way, iconic movie character Ferris Bueller mastered the art of productivity, albeit outside of school. Instead of all that running around the city of Chicago on his "day off," imagine how productive he could have been back in class with tools like speech recognition to complete his day full of missed assignments.

Nearly 30 (!) years ago, Ferris Bueller showed us how to make the most of playing hooky. Being engrossed in ways to make individuals more productive, we can’t help but tip our hats to his ability to cram tons of “work” in on his day off, from a trip to the Sears tower and swindling a fancy lunch to touring the Art Institute of Chicago and stealing the show at the Von Steuben Day Parade.

Ferris may not have put a premium on attending class, but as another school year kicks off, there are plenty of students focused on specific goals and aspirations for success. For these students, productivity, efficiency, and time management are all vital parts of the education equation.

Today, students have a considerable arsenal of technology, apps, and other tools at their disposal for completing schoolwork, including speech recognition. Much like professionals who utilize speech to complete time-consuming documentation and reporting requirements, students – particularly those in high school, college, and at the graduate school level – experience the same benefits when completing homework assignments, papers, study guides, essays, and theses.

All too often, writing assignments can seem daunting, when you’re staring at a blank page and a requirement of 1,000+ words. How do you begin? What should that first sentence say? What point(s) are you really trying to get across? As a graduate student myself, I can attest to the fact that the opening stages of any writing assignment are typically the most difficult.

Speech recognition actually proves to be a valuable tool at virtually any stage of the writing process. In the ideation stage, dictating your thoughts can unleash creativity and help you find a rhythm, so the words flow more easily and you can ultimately complete the full assignment much faster.

For students who have an idea of what they want to write, but can’t quite find the right words, simply “talking it out” removes those barriers of expression. How often do we quietly read certain sentences and passages out loud to ensure that they are articulated correctly? We speak to ensure that what we have written sounds natural. Using speech recognition, this practice can be applied to the entire assignment.

Editing with speech recognition is simple – by selecting individual words and whole sentences that need correction, you can make those final edits through the simple spoken word. Students not only complete the assignment, but they increase their productivity by simplifying the process from start to finish and saving time.

Beyond standard dictation, speech recognition offers students other benefits like accurate and fast transcription. So, with your teacher or professor’s permission you can record a lecture (make sure you place your recorder close to whomever is speaking for greater accuracy) and transcribe it in minutes. Why painstakingly search through your chicken-scratch notes when you can easily search through a full digital transcript of the lecture? This is a very useful tactic for things like test preparation and reference citations.

When you also factor in features like simple voice searches for research papers (e.g. “Search Wikipedia for the history of Chicago”) and transcribed text read-back, it becomes even clearer that speech recognition boasts a full package of productivity and time-saving benefits for students.

Living out Ferris Bueller moments is a fun prospect, but when it comes time to get down to work, using tools at their disposal – including speech recognition – that weren't available just 10 or 15 years ago can help students stay productive, ease the burden of time-consuming writing assignments, and increase their knowledge across a wider spectrum of topics. And who knows, this may leave them time to have some fun (probably something a bit more tame than anything Ferris cooked up) on their days off.


Wednesday, September 25, 2013

Whitepaper showcases how technology impacts profitability

A new whitepaper from language training provider Speexx examining current e-learning and talent management trends offers practical advice for reaping the full benefits of technology. 

The Speexx report analyses five core elements: What organisations currently gain from e-learning, the lack of global e-enablement, moving towards the cloud, the mobile and social learning take up and the link between capability development and communication.

Based on findings from the Speexx Exchange 2012-13 Survey, which involved 230 organisations, the whitepaper highlights the trends shaping the implementation of learning technologies in the workplace and the impact they are likely to have on business efficiency, profitability and growth.

According to Speexx, organisations using a cloud-based LMS are paving the way to meet the requirements of the global market in terms of communication, leadership and expansion. Although 80 percent of organisations aim to use a cloud-based LMS by 2015, only 18 percent have actually moved towards the cloud so far. Another key finding highlights that, while 63 percent of organisations already have a BYOD policy in the workplace, less than a third of these actively use mobile technology for learning purposes. This highlights a significant gap between the opportunities offered by mobile and its actual usage.

Armin Hopp, founder and president of Speexx, said: "The reality is that at this rate, it will not be feasible for the majority who are still operating in local silos to reach their 2015 target in time - some organisations have up to forty different structures within one company. 

"The pressure is clearly on businesses to find new and innovative ways to work smarter, upskill staff and gain a competitive edge in the global market. This leads to the ability to instil a corporate culture of open communication and the ability to embrace new technologies."


Sabio: Don’t overlook importance of ongoing training

As children across the country head back to school this week, technology specialist Sabio has called on customer service organisations not to overlook the importance of ongoing education and training across their contact centre operations. 

With traditional classroom-based training often getting in the way of contact centre productivity, Sabio believes that self-paced training - delivered via an online Virtual Campus or Computer-Based Training (CBT) - can provide the best practice skills contact centre staff need to succeed.

To address this, Sabio Training's Online Course Finder now offers a portfolio of dedicated contact centre training courses. It features almost 200 specialist courses ranging across key technologies - from Avaya, Verint, VMware and Sabio's own solutions - to core customer service and team leadership skills for agents and supervisors. Sabio also offers specific training for different customer service centre functions, from agents and operators to supervisors, administrators, installation specialists, support staff and technology experts.

"With today's increased focus on delivering a high quality customer experience across different channels, it's essential for organisations to make it as easy as possible for customer service staff to perform to the highest levels," said Dan Christmas, Sabio's head of training. 

"Research shows that effective training improves retention, increases staff engagement and can lead directly to improved performance. However, traditional classroom-based training methods can often be unsuitable for co-ordinated and cost-effective contact centre education.

"Over the last three months particularly, we've seen a growing requirement for self-paced training, with customer service organisations recognising that it's sometimes not practical to train all staff at the same time whenever they introduce new technology into the contact centre." 

He added: "Classroom training can help during the early phases of a technology's deployment, but all too often we see in-house skills diminish as key staff leave or new initiatives take precedence. By offering site-specific CBT or Virtual Campus training, we can provide organisations with flexible training programmes that provide exactly the level of role-specific skills needed to support both technology and soft skills across the whole solutions lifecycle."
