Scheduling and Optimization of Biomanufacturing Activities in Multi-product Facilities with Disposable Technologies
Posted on July 20, 2018
The production of biopharmaceutical products relies on the successful execution of complex production tasks. Multiple factors, ranging from the availability of raw materials and trained personnel to adequate facilities, are essential. However, without a systematic, well-orchestrated business process and adequate tools, biomanufacturers risk downtime, costly deviations, and, most significantly, potential loss of product. One aspect that is frequently overlooked, or underestimated, is the availability of appropriate technology tools for detailed planning and scheduling of activities on the manufacturing floor.
Traditionally, planning and scheduling at the various levels, from Sales and Operations Planning (S&OP) to Master Production Scheduling (MPS) to floor operations, are performed with multiple spreadsheets which, as a facility grows in complexity, evolve from simple tables into complex interconnected workbooks supported by macros. Calculation performance, security, file integrity, and limited reporting and sharing capabilities are some of the main issues that result from using spreadsheets for planning and scheduling. Figure 1 presents a summary report of typical activities for the production of biologics products. It is evident that a high level of coordination is required to successfully anticipate raw material needs, prepare equipment, and execute the manufacturing operations and parallel processes.

One area that is often underestimated is the production of buffer solutions for downstream purification. These preparations are typically dispensed in multiple aliquots from a single batch and allocated to the various chromatography steps for one or more product lines. To minimize the number of buffer preparations while ensuring that an adequate supply of solutions is available for the process, schedulers rely on optimization techniques that are complex to implement and execute, and that can produce inaccurate results if constraints are missed or not correctly incorporated into the models.
This article discusses the preparatory work, vendor selection, and project methodologies applied to the implementation of a comprehensive software system for finite scheduling of biologics manufacturing at a multi-product facility that operates with single-use equipment.
One of the key success factors involves the creation (if one does not exist) or refinement of the comprehensive business process that describes the planning and production steps. The example in Figure 2 shows the planning activities performed by various functional areas, the systems in use, and the connections, from demand planning through production and batch release. An important benefit from the creation of this map is the visualization of dependencies and outcomes from each step. It aids in defining the scope of transactions and activities to be facilitated by a finite scheduling application, the inputs and outputs, and the functional area responsible for its implementation and maintenance.
The schematic presented in Figure 3 shows the key steps and content of the Front-End Study (FES) methodology which, when properly organized and executed, provides excellent context for technical discussions related to the system and its technology, vendor selection, project planning through the ‘go-live’ process, and change management.
As mentioned above, the business process map provides the scope of the finite scheduling system and helps to identify the key users and outcomes. That context frames the discussions related to the system user requirements, functionality, any potential interfaces with other systems such as calibration and maintenance management, Enterprise Resource Planning (ERP), as well as the expected reporting capabilities.
In addition to a clear business process, a significant factor in the success of a finite scheduling application is the selection of the appropriate software system and, consequently, the ‘right vendor.’ On many occasions biomanufacturers rely exclusively on standard vendor marketing demos to make their decisions regarding the purchase of a system. Although this may appear to be a time-saving strategy, it does not address actual needs or show proof of required functionality prior to purchase. The methodology illustrated in Figure 3 includes a more detailed vendor selection step, where a brief but descriptive script of the process to be modeled is used for demonstration purposes. An additional recommended step is checking the potential vendor’s references for the specific application of their software as a finite scheduling tool. In summary, the selected software must comply with three key requirements: 1) it is a commercial, off-the-shelf application; 2) it requires minimal or no customization; and 3) it has been successfully demonstrated at other biomanufacturing plants.
The completion of the Front-End Study represents a significant milestone. Outcomes of this exercise include the documented User Requirements Specification (URS), a cost estimate to inform the business case, a project timeline, and the recommended phased approach.
Implementation: Project Kickoff
The start of project implementation is marked with the kickoff meeting, where project team members, consultants, and stakeholders are introduced. The purpose of the kickoff meeting is to provide the team with an overview of the expectations, and to generate enthusiasm for the project. Topics for discussion are project background, scope, phases, timelines, team organization, deliverables, and project goals. A thorough front-end study pays off here, since business processes and user requirements are already well documented, process templates and data are compiled, and major decisions are already made at this point.
Once the kickoff meeting is complete, a series of workshops sets the stage for a successful project. Consultants will require an in-depth overview of the business process, manufacturing process, and facility constraints. This is achieved by using the detailed business process maps, manufacturing process maps, and process templates that were generated as part of the front-end study. A tour of the manufacturing facility, if possible, can put the process overview into context. It is important to involve the schedulers in these workshops, as they can describe any scheduling challenges to the consultants. These discussions should also include reporting requirements.
It is also necessary for consultants to provide end-user training to the schedulers and modelers. IT infrastructure and software should be installed prior to the kickoff, so that users can have a hands-on training experience. If the modeling and scheduling roles will be filled by different users, it is important that both the scheduler and modeler receive both trainings, as this gives each a broader perspective and helps drive a better end result.
Proof of Concept (POC)
The team must agree on a representative manufacturing process to configure as a proof of concept model. The process templates are used as a data source for the consultant to build the proof of concept model. The same methodology that is used to build the proof of concept model is used as a basis to build the models of the remaining product lines.
While developing the POC model, the consultant is still becoming familiar with the manufacturing process and facility, and the end users are learning the software. Frequent review and feedback by the schedulers and modelers are necessary to ensure the manufacturing facility and process are correctly represented in the POC model. It is critical that the consultant has previous experience in configuring manufacturing models for the biomanufacturing industry. An experienced consultant can make viable suggestions during POC development based on experience at other biomanufacturing companies.
The POC model begins with the configuration of the plant equipment and equipment data, such as buffer preparation vessels’ minimum and maximum volumes. Once the equipment has been loaded into the model, the processing activities can be configured. Initially, only the main processing activities are modeled, to ensure that the correct equipment is used for the correct duration. When the schedulers approve the model of the main process activities, another layer is added to the model: solution aliquot requirements.
Many of the main manufacturing process activities have requirements for media or buffer aliquots varying in make-up and/or volume. The aliquot requirements for each main processing activity are specified in the process template, and should align with the bill of materials in ERP. Multiple aliquots can be combined into a single solution preparation activity to maximize capacity in the buffer and media preparation area. Schedulers’ and modelers’ involvement in development of these optimization steps is critical to the POC model. Not only does this result in an improved model, but it can provide additional opportunities for hands-on training.
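As a rough illustration of this aliquot-combining step, the sketch below packs the aliquot volumes of a single solution into as few preps as possible, each bounded by an assumed vessel capacity. This is a simple first-fit-decreasing heuristic; the volumes and the `Prep`/`combine_aliquots` names are hypothetical, and a real model would also enforce vessel minimum fills, expiries, and timing windows:

```python
from dataclasses import dataclass, field

@dataclass
class Prep:
    """One solution preparation batch in a given vessel."""
    vessel_max_l: float
    aliquots_l: list = field(default_factory=list)

    @property
    def total_l(self) -> float:
        return sum(self.aliquots_l)

    def fits(self, volume_l: float) -> bool:
        return self.total_l + volume_l <= self.vessel_max_l

def combine_aliquots(aliquots_l, vessel_max_l):
    """Greedy first-fit-decreasing: place each aliquot (largest first)
    into the first prep with room, opening a new prep when none fits."""
    preps = []
    for vol in sorted(aliquots_l, reverse=True):
        for prep in preps:
            if prep.fits(vol):
                prep.aliquots_l.append(vol)
                break
        else:
            prep = Prep(vessel_max_l)
            prep.aliquots_l.append(vol)
            preps.append(prep)
    return preps

# Example: six aliquots of one buffer, packed into a 2,000 L vessel
preps = combine_aliquots([800, 600, 500, 1200, 300, 400], 2000.0)
```

With these illustrative volumes the heuristic yields two preps (2,000 L and 1,800 L) instead of six separate preparations, which is the kind of reduction in prep count the optimization is after.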
Once complete, the proof of concept model becomes the basis for the configuration of the remaining manufacturing process models. Master data can be manually loaded into the software using built-in interfaces; however, it is not practical or efficient to configure multiple manufacturing processes in this manner. As the proof of concept is being developed, a method to manage data and load the remaining process models can be developed in parallel. This method, based on a load file, can then be used to build models efficiently and consistently.
Data Management for Model Configuration
The usefulness of finite scheduling relies on robust data and accurate model configuration. The source of master data for the scheduling model is the set of process templates for manufacturing operations in the facility. While the process templates contain the process information, this data must then be organized in a way that allows for data management in the model. A solution is to create a set of load files, which are databases that can be imported into the software.
Data is organized by plant location. First, a plant has master data which is not product specific. Equipment data includes tag, resource group, volume and line loss data, process duration, and room location. Solution data includes material numbers, descriptions, document numbers, and expiries. Media and buffer changeover durations and clean hold expiries are also defined.
A plant also has product-specific master data, including area, activities, their durations, and equipment aliases. Each product also has solution requirements, including the solutions, their aliquot volumes, the prep resource group and storage resource group, and the activity for which they are required. In this model, a ‘product’ is a manufacturing stage, such as cell expansion, unpurified bulk, drug substance, column packing/unpacking, membrane installation/uninstallation, and product changeover.
After revision the process template is used to configure a load file for import into the finite scheduling system. A load file is a database made up of spreadsheets, each sheet corresponding to a scheduling entity and configured with its attributes. It is the computer-friendly equivalent of the process templates. By configuring the scheduling model using external databases, a record of the model configuration is kept for reference and future model modification or re-creation. Once load file configuration is complete, an interface is used to import the configuration data into the scheduling software.
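A minimal sketch of the load-file idea follows, with hypothetical sheet and column names (the actual entities and attributes would come from the scheduling software's import interface). Each sheet maps to one scheduling entity, and a validation pass confirms the required attribute columns are present before import:

```python
# Assumed entity sheets and required attribute columns (illustrative only)
REQUIRED_COLUMNS = {
    "Equipment":  {"Tag", "ResourceGroup", "MinVolumeL", "MaxVolumeL", "Room"},
    "Solutions":  {"MaterialNumber", "Description", "ExpiryHours"},
    "Activities": {"Product", "Area", "Activity", "DurationH", "EquipmentAlias"},
}

def validate_load_file(sheets):
    """sheets: dict mapping sheet name -> list of row dicts.
    Returns a list of human-readable problems; an empty list means
    the load file is structurally ready for import."""
    problems = []
    for name, required in REQUIRED_COLUMNS.items():
        if name not in sheets:
            problems.append(f"missing sheet: {name}")
            continue
        for i, row in enumerate(sheets[name], start=2):  # row 1 = header
            missing = required - row.keys()
            if missing:
                problems.append(f"{name} row {i}: missing {sorted(missing)}")
    return problems

# Example load file with one incomplete Activities row
sheets = {
    "Equipment":  [{"Tag": "BPV-2000", "ResourceGroup": "BufferPrep",
                    "MinVolumeL": 400, "MaxVolumeL": 2000, "Room": "B101"}],
    "Solutions":  [{"MaterialNumber": "100123", "Description": "50 mM Tris",
                    "ExpiryHours": 72}],
    "Activities": [{"Product": "StageA", "Area": "Purification",
                    "Activity": "Chromatography 1", "DurationH": 6}],
}
problems = validate_load_file(sheets)
```

Validating before import, rather than letting the scheduling software reject rows one at a time, keeps the external load file as the single corrected record of the model configuration.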
Changes and updates to master data must be made in the process template as well as in the load files, from which the model is updated; this ensures data accuracy across the master data and the scheduling model. Figure 5 illustrates the flow of information.
Optimization of Solution Preparation in a Multi-product Facility
In a multi-product facility, inflow solutions may be produced in-house by central services. These central service areas, one for buffer prep and another for media prep, must accommodate manufacturing operations which could be any combination of products running. In order to minimize the number of preps (and therefore time and cost), aliquot volumes must be combined into larger preps according to production demand. This differing demand leads to unique scheduling of inflow aliquots into larger solution preps.
The scheduling of solution preps presents some challenges. The legacy approach, manual scheduling of each buffer and media prep in spreadsheets, involves copying and editing spreadsheet templates. Performing this scheduling process manually is time-consuming and error-prone. Combining aliquots into larger preps requires case-by-case calculation. Additional factors, such as hold-up volume or transfer-line loss volume, which differ depending on the number of aliquots and the preparation vessel, must also be added case by case. Solution expiry is not tracked, so misses may occur, with solutions expiring or not being prepared before use. The limiting factor of clean-in-place (CIP) skid utilization for the stainless steel prep vessels must be estimated from the scheduler’s experience.
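The case-by-case loss calculation described above can be sketched as follows, where the vessel tags and loss figures are invented for illustration: a fixed vessel hold-up plus one transfer-line loss per aliquot dispensed is added to the requested aliquot volumes to obtain the volume that must actually be prepared.

```python
# Illustrative vessel rules; real values would come from plant master data
VESSEL_RULES = {
    "BPV-2000": {"holdup_l": 15.0, "line_loss_per_aliquot_l": 5.0},
    "SUM-1000": {"holdup_l": 8.0,  "line_loss_per_aliquot_l": 3.0},
}

def prep_volume(vessel, aliquot_volumes_l):
    """Total volume to prepare in `vessel`: the requested aliquots,
    plus the vessel's fixed hold-up, plus one transfer-line loss
    for each aliquot dispensed from the batch."""
    rules = VESSEL_RULES[vessel]
    losses = (rules["holdup_l"]
              + rules["line_loss_per_aliquot_l"] * len(aliquot_volumes_l))
    return sum(aliquot_volumes_l) + losses

# Example: three aliquots from one 2,000 L buffer prep vessel
total = prep_volume("BPV-2000", [500.0, 300.0, 200.0])  # 1,030 L
```

Encoding these rules once per vessel, instead of recalculating them in each spreadsheet copy, is precisely the kind of manual step the data-driven system removes.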
In addition to scheduling vessel usage, the use of disposable technologies creates a demand for materials that must be planned for: the bag for the single-use mixer, the aliquot storage bags, and filters are all examples of disposable materials, and storage tote availability is itself a limiting resource.
In the new, data-driven scheduling system, solution preps are scheduled with the aid of automation and solvers. Aliquot volumes are combined according to prep vessel capacity, and line loss volumes are calculated and added according to the rules of each vessel and the number of aliquots. Preps which would create aliquots too late or too early are flagged as having violated expiry constraints. The use of storage totes between aliquoting and consumption is tracked to account for the availability of empty totes. Significantly, CIP skid usage is optimized according to the prep vessel’s properties and for the back-to-back preparation of solutions with equal or increasing molarities which do not require a CIP in between. All of these defined constraints aid the scheduler significantly in both efficiency and accuracy of scheduling buffer and media preps to support manufacturing.
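Two of the constraint checks described above can be sketched in a few lines. The expiry window and the molarity-based CIP-skip rule shown here are simplified assumptions for illustration, not the actual system logic:

```python
from datetime import datetime, timedelta

def violates_expiry(prep_end, use_start, expiry_hours):
    """Flag a prep whose aliquot would be consumed after its expiry,
    or scheduled for use before the prep has even finished."""
    slack = use_start - prep_end
    return slack < timedelta(0) or slack > timedelta(hours=expiry_hours)

def cip_required_after(molarities):
    """Back-to-back preps with equal or increasing molarity may skip the
    intermediate CIP; return the indices of preps after which a CIP is
    still required (i.e., where molarity decreases)."""
    return [i for i in range(len(molarities) - 1)
            if molarities[i + 1] < molarities[i]]

# Example: four buffer preps on one vessel, molarities in mol/L
needs_cip = cip_required_after([0.05, 0.05, 0.10, 0.02])  # CIP after prep 3
```

In the hypothetical sequence above, only the step down from 0.10 M to 0.02 M forces a CIP, so three of the four preps can run back to back, which is the utilization gain the optimized CIP skid scheduling targets.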
The successful implementation of the scheduling tool is a major milestone for a manufacturing facility. Although the preliminary stages of implementation recommended in this article may seem extensive, the value added from following this methodology is critical in ensuring success once the system is live. The reporting capabilities, well-defined and well-implemented, provide visualization of the schedule to all of the functional areas involved. As the demands on the facility increase and the plant expands and evolves, this system becomes a crucial tool to enable that growth, ensuring correct process execution, mitigating risk as a result of scaling errors, and ensuring supply.
Megan Rabideau is a Manufacturing Systems Engineer at Shire’s Lexington, MA site. She has two years of experience in the biopharmaceutical industry, specializing in electronic applications for manufacturing including finite scheduling, manufacturing execution system (MES), and data historian. Megan holds a Bachelor of Science in Chemical Engineering (BSChE) from Tufts University.
John Maguire is the Associate Director of Manufacturing Systems at Shire’s Lexington, MA site. His main area of expertise is the application of process engineering and operational technology for life sciences. He has over 15 years of process engineering experience, including functional specification development and start-up of several drug substance manufacturing facilities in Ireland and the United States. John led the design and onboarding of single-use systems for Shire’s Lexington, MA manufacturing facility, which won an honorable mention for Facility of the Year in 2011. As a subject matter expert, John provides operations technology perspective for ERP and more recently to finite scheduling systems at Shire. John holds a Bachelor of Arts degree in Natural Science.
Gloria Gadea-Lopez, Ph.D. is Director of Production and Business Systems at the Shire Lexington, MA site, where her team is responsible for business operations analysis and manufacturing technology. She started her career in biotechnology at Genzyme, where she acquired significant experience as a process engineer, supporting manufacturing operations and process development. The second half of her career involves the combination of process operations and technology, implementing GMP-compliant applications for devices and biologics manufacturing. She has led the deployment of integrated systems for data analytics and process monitoring including Statistica, OSI PI, and Simca, Manufacturing Execution Systems (MES), machine vision for packaging inspection, and finite scheduling. She holds degrees in Chemical Engineering (BS), Food Science (MS) and a Ph.D. in Biosystems Engineering. Of particular interest is the impact of single use technologies (disposables) on MES, automation and data analytics. Gloria and her team also enjoy collaborating with academia, for example with the BioProcess Labs at MIT, and with the Technology, Engineering and Management project at Queen’s University in Canada.