Charlie Provenzano

Why Your Healthcare Organization Needs to Catch FHIR Part III

How Do I Comply With All of These New Regulations? - It’s All in the Use Cases

We’ve all had a couple of months now to review the new rules from HHS. While there may be some tweaks that come from the official review period, date changes, etc., the main points are finalized. Payers and providers will be required to share patient data, and HL7 FHIR will be the standard through which it’s accomplished. While the regulations and inherent expectations are clear for most, the actual implementation details, the implications for your own organization, and the questions you need to ask yourself, are probably a lot less clear-cut. For example:

How will I identify patients?

  • If I have a patient ID and I’m querying a payer, should I be able to use that ID with a date range?

  • If I’m querying a payer and I don’t have an ID?

  • If I’m querying a provider and I don’t have an MRN?

  • Do I have a patient match service as part of my FHIR implementation?
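When no shared identifier exists, the usual fallback is a demographic search against the FHIR `Patient` endpoint. Here is a minimal Python sketch of building such a query; the base URL and demographics are hypothetical placeholders, not part of any real system:

```python
from urllib.parse import urlencode

# Hypothetical base URL -- substitute the payer's or provider's real endpoint.
FHIR_BASE = "https://example.org/fhir"

def patient_search_url(family: str, given: str, birthdate: str) -> str:
    """Build a FHIR Patient search URL from demographics, for the case
    where no shared patient ID or MRN is available."""
    query = urlencode({"family": family, "given": given, "birthdate": birthdate})
    return f"{FHIR_BASE}/Patient?{query}"
```

A demographic search like this returns candidate matches rather than a guaranteed identity, which is exactly why a dedicated patient match service may still be needed.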

Which version of FHIR do I need to support?

  • New regulations demand STU2 (Standard for Trial Use), but comments are urging STU4

  • What if providers in my network use EHRs with STU2? Can I still support STU4?

How will patient consent be managed?

  • Do I need direct consent from the patient?

  • Can I trust that consent has been given to the requestor?

  • How will I confirm consent for all types of data being requested?

What data sources do I need to become FHIR-enabled? And for what types of requests?

  • If I’m a payer, do I only need to provide data I’ve received via claims?

  • What data will we consider ours vs. the patient’s?

  • How do I describe the provenance of data at my institution that was shared by other institutions or providers?

The members of the Da Vinci Project have been wrestling with these issues since 2017. Since Da Vinci is made up of private industry participants from both providers and payers, as well as participants from CMS, the Da Vinci use cases are already aimed squarely at these questions. The answers you need will likely be found in the implementation guides for these use cases as they are developed, and in the accompanying reference implementation projects. Let’s talk about applying these resources to your own work in FHIR.

What’s a use case?

For the Da Vinci Project, the use cases all revolve around value-based care workflows. The members focused on high-volume manual activities that would benefit from full or partial automation. Specific to the new regulations, Da Vinci is actively developing two use cases for eHealth Record Exchange: Payer Data Exchange (PDex) and Clinical Data Exchange (CDex). Detailed descriptions of all of the Da Vinci use cases can be found here: http://www.hl7.org/about/davinci/use-cases.cfm

A use case defines a problem at a high level. Within each use case are defined scenarios. Each scenario is a User Story in the Agile development sense. Each scenario describes a workflow involving specific end-user characters who each play a part. These scenarios strive to be as realistic as possible. For example, here is a draft workflow from one of the current PDex use case scenarios:

Patient:

Lauren Dent is a 62-year-old female who lives in Wisconsin but spends winters in Tampa Bay, FL.

Lauren works on a seasonal basis and has just accepted a new position with her employer, moving to a larger town in Wisconsin to live with her daughter. As a result of the move, she has selected a new primary care provider.

  • Lauren is in reasonable health but is managing a number of conditions.

  • She has been diagnosed as pre-diabetic and is being treated with medications.

  • She is taking medication for hypertension.

  • She had a knee replacement five years ago.

  • She had a procedure seven years ago to correct a problem with a disc in her lower back.

  • She has a history of a normal colonoscopy five years earlier.

  • She received Pneumovax and Zostavax vaccinations four years earlier.

Physician:

Dr. Jillian is an internist.

In this scenario, Lauren has moved into the community and wishes to establish care with Dr. Jillian. The goal of the scenario is to describe the process by which Dr. Jillian receives available clinical data from Lauren's insurer.

What’s a Reference Implementation?

For each use case, an implementation guide is created, along with a reference implementation. The reference implementation contains code and related artifacts that address the scenario. It’s intended to be a source of information and a model for developers who are trying to implement the use case in their own systems. It can also be used to demonstrate the FHIR interactions described in the guide. All of this code is public, so feel free to copy it, incorporate it into your own projects, or just use it as a learning tool.

For the above scenario (for which HealthLX built the reference implementation) the reference code includes: a SMART on FHIR app (to gather information for a CDS hooks call); code to launch a CDS hook and retrieve a CDS information card; and a second SMART on FHIR app to display the data returned and allow that data to be selectively written back to the EMR. (The code also had to support STU2, STU3, and STU4 on the EMR end, and to write either individual FHIR resources, or a document bundle if those resources were not supported for write by the EMR.) All of this code is available in a GitHub repository as both raw code and as Docker images. 
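The write-back decision described above — post individual FHIR resources when the EMR accepts them, otherwise fall back to a document bundle — can be sketched in a few lines of Python. This is a simplified illustration, not the reference implementation's actual code; a real FHIR document Bundle would also require a Composition as its first entry:

```python
def prepare_write(resources: list, writable_types: set):
    """Return the individual resources if the EMR supports writing all
    of their types; otherwise wrap them in a single document Bundle.
    (Simplified: a real document Bundle also needs a Composition entry.)"""
    if all(r["resourceType"] in writable_types for r in resources):
        return resources
    return {
        "resourceType": "Bundle",
        "type": "document",
        "entry": [{"resource": r} for r in resources],
    }
```

The same branching logic lets one code path serve EMRs with very different write capabilities.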

The next step for a reference implementation is the series of FHIR Connectathon events put on by HL7.org at its quarterly meetings. The Connectathons are a chance for other HL7 members to explore the reference implementation and give feedback on the guide or the active use cases. Most often, a reference implementation is improved and enhanced over a series of meetings until the members have agreed that the use case is ready for balloting by HL7 as a whole. HL7 balloting occurs three times a year, and once the artifact is passed it becomes part of a STU. The public Da Vinci Project implementation guides (in their various states) are available on the Da Vinci public Confluence page.

Now What?

As clear as the new HHS rules are, becoming compliant with them is easier said than done. The Da Vinci Project’s value will become increasingly apparent as organizations wrestle with the questions outlined above. Da Vinci is doing the work most healthcare organizations need to do, but don’t have the time or resources to devote to it. And, Da Vinci’s collaborative aspect ensures high levels of relevancy and quality in the use cases and implementation guides. All of this work is being done on behalf of the entire industry, so don’t feel like you have to go it alone when it comes time for your own implementation. Better yet, if you feel you have the cycles and the expertise, and you want to influence the evolution of the FHIR standard, inquire about your company joining the Da Vinci Project as a paying member.

Want to review this series in depth? View the previous posts:

Part I

Part II

Why Your Healthcare Organization Needs to Catch FHIR Part II

Unlike some technologies that have been talked about everywhere but seldom implemented (blockchain?), FHIR is finding widespread adoption both in systems and in policy. This month, the U.S. Department of Health and Human Services (HHS) proposed new rules to support interoperability and in the process endorsed new rules from CMS and ONC, both of which are built around the FHIR standard. There’s plenty of new reading (724 pages for the HHS rule alone), but how can you apply it to your own institutions and your own systems? What are your institution’s FHIR priorities?

Value-Based Care and the Da Vinci Project

If part of your institution’s mission involves value-based care, the chances are pretty good that you will be focusing on interoperability outside the four walls, and that the Da Vinci Project use cases will be of primary interest. The Da Vinci Project is a private sector initiative (full disclosure, HealthLX is a founding member) tasked with addressing the needs of the value-based care community using the FHIR standard. The goal of the Da Vinci Project is to help payers and providers to positively impact clinical, quality, cost and care management outcomes.

At HIMSS19, members of the Da Vinci Project successfully demonstrated use cases involving data sharing between payers and providers using FHIR endpoints. These use cases were modified in each demonstration to use different EHRs and different payer systems in order to demonstrate that the same FHIR interfaces can be applied across the industry when the standards are followed. If payer/provider interoperability is one of your priorities, you should pay close attention to the Da Vinci Project use cases as they develop. The use cases can be found here: http://www.hl7.org/about/davinci/.

US Core - The Fundamental Elements of Exchange

Maybe your priorities are not focused around a particular set of use cases.  Perhaps you’re building a new system or are updating an existing system and you just want to be able to share that data using the FHIR standard. Just about every current FHIR implementation is built around some portion of a set of resources referred to as US Core. The US Core implementation guide defines the minimum requirements for accessing data via FHIR and is intended to be the foundation for all US-realm FHIR implementation guides. (It’s also at the center of the new HHS proposed rule.) Whether you are building out your own FHIR server, building FHIR workflows, or working with a vendor, if your solution covers the US Core endpoints you’re off to a good start.

The US Core profiles cover obvious elements such as allergies, conditions, procedures, medications, and results. (For a full list of profiles, visit: http://www.hl7.org/fhir/us/core/.) Each profile specifies the minimum required elements to request the profile, as well as minimum elements that must be returned. For example, AllergyIntolerance has “Patient ID” as the one required argument, but must return: a status of the allergy, a verification status, a code which tells you what the patient is allergic to, and a patient ID. As with most things produced by HL7.org, the documentation is excellent.
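The minimum-elements idea above lends itself to a simple conformance check. Here is a hedged Python sketch that verifies an AllergyIntolerance resource carries the elements the profile requires in a response; the element names follow the later FHIR releases and the sample resource is invented for illustration:

```python
# Elements the AllergyIntolerance profile requires in a response
# (names as used in later FHIR releases; simplified for illustration).
REQUIRED_IN_RESPONSE = ("clinicalStatus", "verificationStatus", "code", "patient")

def missing_allergy_elements(resource: dict) -> list:
    """Return the required response elements absent from an
    AllergyIntolerance resource -- a toy conformance check."""
    return [name for name in REQUIRED_IN_RESPONSE if name not in resource]

# Hypothetical sample resource for demonstration purposes.
sample = {
    "resourceType": "AllergyIntolerance",
    "clinicalStatus": "active",
    "verificationStatus": "confirmed",
    "code": {"text": "Penicillin"},
    "patient": {"reference": "Patient/example"},
}
```

A real implementation would validate against the published US Core StructureDefinitions rather than a hand-maintained tuple, but the principle is the same.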

Internal vs. External Servers

Most of the use cases proposed by the various agencies understandably focus on interoperability between healthcare entities, but FHIR can be an effective tool for data exchange within your organization as well.  If you work in a large enterprise, it’s not uncommon for your IT programmers to spend hundreds of hours a year integrating internal systems via proprietary APIs.  What if all of your systems used the same interfaces for the same types of data?  Interop programming would become a much simpler endeavor.  So then, it’s quite possible in a large or complicated enterprise that you may want to have more than one FHIR server.  Perhaps you will want a server for external inquiries, and a second or third for internal data integrations. 

Let’s take allergies as an example again.  Suppose that a particular payer institution has two sources for allergy data: a care management system where that data is collected by care management (CM) nurses, and a data warehouse that collects and manages that data from an HIE and other internal and external sources. For an external query, you would want to provide allergy data from the warehouse because it’s going to contain data on more members, not just those in the CM database.  But you probably also want an internal FHIR server which can make the CM allergy data available to the warehouse, because it’s a direct and reliable source.  In fact, for internal queries, you will probably want to ping that source before the warehouse.  The FHIR resources and call syntax are the same, but the server address, the purpose, source and destination are all different.
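The routing decision in that allergy example can be made concrete with a small Python sketch. All server addresses here are hypothetical placeholders for the care management system, warehouse, and public server described above:

```python
# Hypothetical server addresses for the multi-server setup described above.
CM_SERVER = "https://cm.internal.example/fhir"  # care management system (direct source)
WAREHOUSE = "https://dw.internal.example/fhir"  # enterprise data warehouse
PUBLIC = "https://fhir.example.org/fhir"        # externally published server

def allergy_sources(internal_caller: bool) -> list:
    """Order the FHIR servers an allergy query should try.
    Internal callers hit the direct CM source first, then the warehouse;
    external callers only ever see the public, warehouse-backed server."""
    return [CM_SERVER, WAREHOUSE] if internal_caller else [PUBLIC]
```

Note that the resource and call syntax stay identical across all three servers; only the base address and the routing policy differ.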

Build vs. Partner

The technical aspects of FHIR are not too difficult to comprehend, and the decision to create a RESTful API was made in part in order to use a technology that is already commonplace. Therefore, finding trained resources who understand the technologies involved shouldn’t be a difficult task. If your environment is simple, or you have an established team for interoperability programming that is up to the task, all of the tools are publicly available for you to get started on your own.

However, consider a partner if any of the following statements are true:

  • You are resource or time constrained but FHIR is still a high priority

  • Your IT environment is large and complex (i.e., multiple clusters, a mix of new and legacy, purchased and home-grown systems)

  • You have multiple sources for the same data types

  • You have internal AND external interoperability challenges 

If any of the above describes your environment, you probably want to consider partnering with a firm to envision and execute your overall FHIR strategy.

As you develop your own plans around FHIR, ask yourself these questions:

  • What are your use cases?

    • Does your plan conform to new and impending regulations?

    • Do you need internally and externally facing FHIR servers?

    • Do you need to consider value-based care (Da Vinci Project) use cases?

  • What are your most trusted sources of data for sharing externally?

  • Do you need to partner, or can you build this with existing resources?

As I mentioned, there is plenty of reading material out there.  I’ve included some primary sources in the links above. I hope this post has helped you focus your research into FHIR and illuminated for you some of the considerations for this type of project. Click here to continue reading and check out Part III.

In case you missed it, check out Part I.

Why Your Healthcare Organization Needs to Catch FHIR

For many in healthcare IT, FHIR might be viewed as a radical shift in interoperability standards, “yet another standard” that requires retooling of multiple systems.  For those institutions and individuals who have been working in the healthcare interoperability space for years (and for some of us decades) it can be viewed as a natural and welcome evolution of the standards, and an evolution that brings with it many benefits we have sought for years.

The Medium is the Message

First attempts at interoperability between systems involved parsing messages. Message standards allow data sharing in its most basic form; their main drawbacks are timeliness (messages must be published and “listened” to, so real-time integration is elusive at best) and uncontrolled customization of messages. Standardized message formats like HL7 (and I include document specifications in this definition) are still the most popular method of data sharing between systems, despite these drawbacks.

Standardization of messages, though, even with a specification as widely adopted as HL7v2, has its limits. Z-segments, designed to allow customization of the standards, are very commonly used to carry data that is actually covered elsewhere in the specification. Re-use of “unused” segments is also common. These types of customizations are harmless for interoperability within the four walls of an IT organization. In this context, customizations can be standardized across all systems that will consume the messages. It can be messy, but it’s manageable. However, this breaks down the minute those messages need to be shared outside of the originating IT organization.

And then sometimes the four walls move.  As designers, we tend to think of our own IT organization as the center of the integration universe.  Due to that fact, few integration designers have ever thoughtfully considered the possibility of a merger of institutions when architecting their message-based solutions. Even when we do consider it, we tend to assume that “our” standards will prevail. The result is usually a tangle of confused standards that has to be continually re-scripted as the systems evolve.  If you believe that your own IT organization is slow to roll out upgrades, this could well be one of the main causes.

APIs and the Quest for “Real-Time”

I always feel like “real-time” should be in quotes because it is defined contextually. Usually, though, it refers to request-based as opposed to event-based integration. For illustration purposes, we can generalize that messages are generated based on events: an admission, a discharge, an order, and so on. Other systems can listen for these events and capture the messages, or they can filter through messages historically and look for particular events. A real-time integration is usually request-based: one system requests data from another system as needed.

In the pursuit of more real-time integration, in the last 10 or so years, API (Application Programming Interface) based integrations have become more popular.1  The decision by some large EMR vendors to open up their system APIs to outside consumers has provided more meaningful and actionable integrations for the end users. Simultaneously, it also provides stickiness for the associated EMR vendor.

Vendor-specific API integrations provide the greatest flexibility and leverage of existing systems.  However, they are also expensive to create and maintain. Part of that expense comes from the skill set required to build those integrations in the first place.  Where message-based integrations may often be built and maintained through configuration of a tool designed for message handling (i.e., an enterprise service bus), potentially combined with some form of scripting, API integrations require a coder’s skill set which is often more expensive. Additionally, each vendor’s API will have its own structure for calls to that API.  That is, the code required to query a system for something like patient allergies can be completely different for each system. It may require different arguments, multiple lookup calls, different security protocols, etc. The fundamental architectures could also differ (SOAP vs. RESTful). In addition, some APIs may be accessible from the web, and others not.

FHIR Evolution

FHIR (Fast Healthcare Interoperability Resources) aims to bring together the best aspects of message-based and API-based interoperability into one standardized API. As such, FHIR combines the standardization of an HL7 message with the real-time request-based structure of an API. From the HL7.org/FHIR website:

“(FHIR) is designed to enable information exchange to support the provision of healthcare in a wide variety of settings. The specification builds on and adapts modern, widely used RESTful practices to enable the provision of integrated healthcare across a wide range of teams and organizations.

The intended scope of FHIR is broad, (and) … is intended for global use and in a wide variety of architectures and scenarios.”

FHIR is:

●       An API addressable via the HTTP protocol, i.e., a web-based API

●       A standardized API built around a set of defined resources backed by HL7.org

●       A flexible API in that resources are standardized, but the returned data is customizable based on the request

This allows developers/integrators to build applications based on data from disparate applications using the same API calls throughout.  FHIR improves on message-based interoperability by providing real-time request-based access that is backed by standards. It improves on proprietary API implementations in that the structure of the calls is standardized for each resource. This level of standardization allows a single developer to build an application that can pull data from numerous discrete sources.  The sources themselves can be from across the institution or from across the world.
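That uniformity is easy to see in code. The Python sketch below builds the same AllergyIntolerance query against two different (hypothetical) institutions; only the base URL changes, never the shape of the call:

```python
def allergy_query_url(base_url: str, patient_id: str) -> str:
    """One standardized FHIR call shape works against any conformant
    server -- only the base URL differs per source."""
    return f"{base_url.rstrip('/')}/AllergyIntolerance?patient={patient_id}"

# Hypothetical FHIR endpoints at two unrelated institutions.
sources = [
    "https://hospital-a.example/fhir",
    "https://clinic-b.example/fhir/",
]
urls = [allergy_query_url(s, "12345") for s in sources]
```

With proprietary APIs, each of those two sources would have needed its own bespoke client code; with FHIR, one function serves both.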

Perhaps the best illustration of this concept is the most prevalent real-world example, the Apple Health app. Apple Health is an app for the iPhone which recently added the ability (in beta test) to hold all of a user’s medical records.  The records are gathered from participating institutions using the FHIR standard. (Apple Health uses Draft Standard for Trial Use #2 of FHIR.  The current version is DSTU3, with DSTU4 destined to be the first production ready version.) Included resources are: allergies, conditions, immunizations, lab results, medications, procedures, and vitals. Because of the FHIR standard, information can be collected from multiple institutions without having to write code that is institution specific. Within my own record, I can view procedures performed in multiple locations and institutions within the same view, and I can trace those records back to their originating source.  A standardized API makes all of that possible.

That same standardization can work for integrators inside of their own IT walls in ways that are just as impactful. Most institutions have a combination of vendor and home-grown systems spanning any number of technologies from the last two decades. Imagine if building an application that draws data from those systems could be as easy as creating a web page. If all of those systems provide FHIR endpoints (and also participate in the same security paradigm), it can be just that easy. With a standard as flexible and powerful as FHIR, we are likely to see rapid progress in its adoption. In fact, with Apple and its participating healthcare providers, it’s already happening on a national scale. This adoption will likely be limited more by each institution’s ability to publish FHIR interfaces than by any other factor.

What’s Next?

FHIR is revolutionary in our journey to more patient-centered, data-driven healthcare in the U.S. It has reached a tipping point with its adoption by 82% of hospitals, the ten largest electronic health record vendors, the Centers for Medicare and Medicaid Services, and a majority of clinicians.

Check back for Part II in this three-part blog series, where we take a deeper dive into FHIR and the challenges it poses for healthcare organizations that are slow to change or roadblocked. In Part III, we’ll share implementation and adoption strategies you can use to put FHIR to work for your organization.

1.      Roy Thomas Fielding, the originator of the REST architectural style, describes an API this way:

“A library-based API provides a set of code entry points and associated symbol/parameter sets so that a programmer can use someone else’s code to do the dirty work of maintaining the actual interface between like systems, provided that the programmer obeys the architectural and language restrictions that come with that code.” (Source.)

An API can be thought of as a set of code objects addressable by other systems through a programmed interaction.  In some cases, vendors have chosen to make public the very objects that comprise the fundamentals of their systems, i.e., they publish the same objects used by their own engineers.  In other cases, they have provided a specific API that is only used by third-party applications. Both methods provide the same benefits to end users.

An API-based integration is preferable for real-time integrations because the data may be requested from the API as needed rather than listened for as with a message.  This is request-based integration as opposed to event-based. This allows other systems to call on data as needed and in real time. For example, an order system in a walk-in clinic may want to request allergy information at the time the order is created from a local hospital that treats many of the same patients.  Some vendors have gone as far as to include hooks for integrators to insert their own custom workflows and screens. For example, a user may wish to view notes from an outside system when updating orders in the primary system.

About HealthLX

HealthLX is a company born from years of integration experience and is completely dedicated to healthcare interoperability. The architecture of HealthLX is highly scalable and extensible, bringing HealthLX dataflows to a myriad of devices and systems. HealthLX – FHIR Starter™ can place FHIR-compliant interfaces in front of your legacy and homegrown systems.

By exploiting our architecture, specialized data flows can be built to share information between care management platforms, payer systems, provider EMR systems, mobile applications, big data repositories, and more.