4 Integration of Observed Data and Reaction Mechanisms in Deep Learning for Designing Sustainable Glycolic Acid


Xin Zhou


Ocean University of China, College of Chemistry and Chemical Engineering, 238 Songling Road, Qingdao, Shandong 266100, China


4.1 Introduction


Dwindling fossil fuel resources [1], increasingly serious waste disposal problems [2], and the accumulation of nonbiodegradable plastic waste have become significant threats to the environment and sustainable ecosystems [3, 4]. To date, the world is estimated to have produced over eight billion tons of nondegradable plastic waste [5], of which only 9% has been recycled and another 12% incinerated [6]. The remaining 79% has accumulated in landfills or been released into the natural environment, including aquatic ecosystems [7]. This inappropriate handling of plastic solid waste contributes directly to a surge in CO2 emissions [8, 9], which in turn drives global warming and poses a grave threat to the environment and the health of living things on Earth [10–12]. Worldwide, the development of degradable materials [13–15] has become a "hot spot" for addressing the ecological and environmental problems caused by nondegradable plastics [16, 17].


Polyglycolic acid (PGA) [18, 19] is a synthetic polymer with a high level of biodegradability, making it an ideal and versatile material for developing biodegradable, biomass-derived plastics to replace nondegradable ones [20–22]. PGA also exhibits good biocompatibility and can be applied in human tissue and biomedical engineering, for example in absorbable surgical sutures, biodegradable cardiovascular stents, and bone regeneration [23, 24]. The annual demand for PGA has increased rapidly [25]. The global bioresorbable polymers market was estimated at USD 417 million in 2022 and is projected to reach USD 688 million by 2027, at a compound annual growth rate (CAGR) of 10.5% [26]. The global PGA market is expected to grow from USD 6.62 billion in 2023 to nearly USD 15.64 billion by 2032, a CAGR of 10.02% over the study period 2024–2032 [27]. However, in the face of these momentous application requirements, the production capacity of PGA is insufficient. One of the main reasons is the inadequate production capacity of PGA's high-purity monomer, glycolic acid (GA) [19]. The hydroxyl and carboxyl groups in the GA structure can complex with other ions, and these complexes are difficult to remove. In addition, GA readily self-polymerizes into low-molecular-weight glycolic acid dimers, tetramers, and other byproducts [28]. These characteristics make glycolic acid difficult to purify and directly lead to low-purity GA crystals [29]. Synthesizing high-purity GA crystals therefore places high demands on the GA synthesis and separation process.


The process technologies used for industrial GA synthesis mainly involve the hydrolysis of hydroxyacetonitrile [30], the hydrolysis of chloroacetic acid [19], the hydrolysis of methyl glycolate [31], and the carbonylation of formaldehyde [32]. Among these synthetic routes, developing a green process for synthesizing glycolic acid with fewer byproducts is crucial. Our previous research indicated that using ethylene glycol as the raw material over highly efficient Pt–Mn2O3 nanocatalysts produces only a few byproducts, namely small amounts of glyoxylic acid and formic acid [33, 34]. Moreover, industrial scale-up simulation and conceptual process design also show that the selective oxidation of ethylene glycol to GA (EGtGA) has superior techno-economic performance to conventional processing routes [35]. Therefore, the direct preparation of GA from low-cost ethylene glycol has a promising prospect for industrial application. Identifying the essential factors affecting the target product GA, improving its yield, and reducing the content of byproducts are prerequisites for producing high-purity GA [36, 37]. The operating parameters of the EGtGA process, such as reaction temperature and residence time, affect the GA yields, process energy consumption, and net present value [35]; hence, it is feasible to improve the yield and quality of GA by adjusting these key factors. Other aspects of the EGtGA process, such as the catalysts [38] and the separation process [39], may also substantially impact GA yields and techno-economic performance. However, owing to high experimental costs and the time and energy required to repeat experiments, studying all of the above factors and determining the optimal values of the essential ones for manufacturing high-purity GA is a great challenge.


With the rapid rise of artificial intelligence [40], machine learning (ML) methods have gradually evolved into an effective strategy for addressing complex nonlinear problems (i.e. high-order, multiloop regression and classification) [41]. The ML approach has been employed to predict the yields of complex catalytic processes in the chemical engineering field [42, 43]. Using artificial intelligence to model and predict the properties and distribution of products in chemical reaction processes can significantly save time and cost. A random forest (RF) ML model with high prediction accuracy was established to predict the mass yields and properties of specific products produced from different raw materials under different conditions [44]. Compared with the RF and support vector machine models, the deep neural network (DNN) model achieved the highest prediction accuracy, with an average R2 of the optimized DNN model above 0.90. Thybaut and coworkers systematically compared the prediction results of ML methods for Fischer–Tropsch reaction models [45]; the results show that the deep belief network (DBN) model is superior to the other models on the verification indicators [45, 46]. Furthermore, multiple linear regression, support vector machines, extreme gradient boosting (XGBoost), and RF models have been employed to guide the optimization of product distribution and improve the yield of target products in catalytic conversion processes [47, 48]. Against this research background, selecting appropriate artificial intelligence modeling methods and developing ML prediction models with good accuracy will play an essential role in improving the yields of target products in chemical processes. Hence, many studies have been carried out to bridge the gap between input variables, including raw material properties and operating conditions, and the yields and properties of key target products [49, 50].
To achieve accurate artificial intelligence modeling, a large amount of training data is usually needed. However, in the research and development stage of a chemical process, especially at the laboratory pilot stage, it is extremely time-consuming and costly to obtain the massive experimental data (e.g. thousands of data sets) required. Moreover, the progress of existing artificial intelligence models in accelerating engineering and guiding the production of polymer-grade pure glycolic acid is still limited.


In previous work, the laboratory pilot study, reaction kinetics solution, and conceptual process design of glycolic acid production by selective oxidation of ethylene glycol over efficient Pt-based nanocatalysts were completed [30]. On this basis, an ML prediction model of the yield and characteristics of polymerization-grade glycolic acid, "dual-core driven" by coupled data and reaction kinetics, is established in this study. This hybrid data- and mechanism-driven deep learning model, termed the HDM model, is developed to guide experimental research and optimize operating conditions. Concretely, data expansion based on reaction kinetics is implemented to enrich the database and meet the challenge of data scarcity: more than 6000 mechanism-driven virtual data points are created to enhance the model's prediction ability, which is the main innovation of this work. The ML model is then trained to predict the yields of the process products, and the relative importance of each input to each target is analyzed. A multi-objective optimization method based on the genetic algorithm is used to optimize the polymer-grade glycolic acid production process (maximizing the yield, purity, and economic benefits of glycolic acid while minimizing energy consumption). Finally, experiments and process simulation verify that the optimized operating conditions can produce polymer-grade glycolic acid with low energy consumption, high purity, high yield, and high economic performance.


4.2 Methodology


The framework for predicting GA production performance includes six steps, as illustrated in Figure 4.1. The detailed implementation of the framework is introduced below through the case study of the EGtGA process.


Step 1: Database generation. The feature variables were first determined. Based on the existing experimental data, a representative reaction kinetic model of the EGtGA process was established and simulated to generate a steady-state dataset of key product yields. The correlation between features and targets was then analyzed.


Figure 4.1 The overall research framework for predicting glycolic acid production by selective oxidation of ethylene glycol using deep learning.


Source: Zhou et al. [51]/with permission from John Wiley & Sons.


Step 2: Data preprocessing. Developing the detailed database and performing the introductory programming (i.e. loading ML library functions and reading database files).


Step 3: Deep learning. Building ML or deep learning models and performing model training, testing, and prediction.


Step 4: Comparative analysis. The performance and robustness of various ML models, namely the DNN, DBN, fully connected residual network (FC-ResNet), and RF regression, were compared to identify the most appropriate ML model for the EGtGA process.


Step 5: Optimization and prediction. The yields and performance of critical products in the EGtGA process were predicted to verify the trained FC-ResNet model, and the optimal operating parameters for the highest GA yield were obtained and verified by experiments. Genetic algorithms were used to optimize the model parameters (such as the number of iterations) of the ML model, and the optimized parameters were used to train it.


Step 6: Multidimensional evaluation. The EGtGA process was deeply analyzed and multi-dimensionally evaluated using optimal operating parameters based on the life cycle, economic, and social environment framework.


4.2.1 Database Generation


The database used to train the ML model consists of many schemes with different characteristics and the corresponding production performance. The first step in building the database is to select representative parameters, among the factors affecting GA production, as training features. We selected 10 features that play an essential role in GA production (see Table 4.1). The EGtGA process model was established based on the conceptual design flow chart (see Figure 4.2a).


A total of 610 sets of experimental data using different reaction conditions and catalysts were collected. Because of its advantages in steady-state simulation and process control, Aspen Plus has been widely used in process simulation, optimization, and prediction; hence, it was applied to establish the simulation model of the EGtGA process. The process simulation consists of 13 process unit modules, including a compressor, a continuous stirred tank reactor, a vacuum column, and a vacuum dividing wall column, as shown in Figure 4.2b. The binary interaction parameters of the stream compositions, obtained with the non-random two-liquid (NRTL) method, are adopted to calculate the phase equilibrium. The reaction kinetics and process models were established based on the experimental data. Using the process model developed in Aspen Plus, abundant simulation data can be obtained; as a result, 6110 cases were generated through process simulation to enrich the database for the ML models. The hold-out method is used for dataset partitioning [51]: a small part is taken as the test and verification sets, and the rest as the training set, with training set : verification set : test set = 80% : 10% : 10%. The correlation performance was identified by analyzing the relationship between essential operating parameters (reaction temperature, residence time, and reaction pressure) and the conversion and GA yield. The reaction networks of the EGtGA process and the reaction activation energies (Pt loading 1.4 wt%; Mn loading 0.9 wt%) are shown in Figure 4.2c.
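The hold-out partitioning described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data; the 6110 rows and 10 features mirror the simulated cases and Table 4.1, but the 80/10/10 split logic is the only point being shown.

```python
# Sketch of hold-out partitioning (training : verification : test = 80 : 10 : 10).
# The random data is a stand-in for the simulated EGtGA cases.
import numpy as np

rng = np.random.default_rng(seed=42)
X = rng.random((6110, 10))   # 10 input features per case (Table 4.1)
y = rng.random((6110, 5))    # conversion + four product yields as targets

idx = rng.permutation(len(X))            # shuffle before splitting
n_train = int(0.8 * len(X))
n_val = int(0.1 * len(X))

train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]

X_train, y_train = X[train_idx], y[train_idx]
X_val, y_val = X[val_idx], y[val_idx]
X_test, y_test = X[test_idx], y[test_idx]

print(len(X_train), len(X_val), len(X_test))  # 4888 611 611
```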


Table 4.1 The selected features and desired targets for deep learning.


No.  Items        Unit      Description
F1   Temperature  °C        Reaction temperature
F2   Time         h         Residence time
F3   Pressure     MPa       Reaction pressure
F4   E1           kJ mol−1  Activation energy of the first reaction
F5   E2           kJ mol−1  Activation energy of the second reaction
F6   E3           kJ mol−1  Activation energy of the third reaction
F7   E4           kJ mol−1  Activation energy of the fourth reaction
F8   Pt loading   wt%       Pt atom loading in PtMn/MCM-41 nanocatalysts
F9   Mn loading   wt%       Mn atom loading in PtMn/MCM-41 nanocatalysts
F10  TOF          h−1       Turnover frequency
F11  Conversion   %         Conversion of feedstock ethylene glycol
F12  GA yields    wt%       Yields of the desired product glycolic acid
F13  GAD yields   wt%       Yields of byproduct glycolaldehyde
F14  CO2 yields   wt%       Yields of byproduct CO2
F15  FA yields    wt%       Yields of byproduct formic acid

Figure 4.2 (a) Conceptual design process flow chart; (b) simulation flowsheet in Aspen Plus; (c) reaction network and kinetics parameters (Pt loading 1.4 wt%; Mn loading 0.9 wt%).


The computing environment in this study comprises an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz and an NVIDIA GTX 1060 GPU. In total, 5040 complete datasets, including experimental and simulation data and composed of process data, reaction kinetic parameters, and product yields, constitute the comprehensive database required for ML training.


4.2.2 Deep Learning


The ML models in this work mainly include neural networks and RF. The neural network models considered are a DNN, a DBN, and an FC-ResNet; the ensemble algorithm, RF, is described in the final part of this section.


4.2.2.1 Deep Neural Networks


The DNN is a feedforward neural network with multiple layers [46]. It is one of the simplest neural network forms and has good prediction performance. These networks usually have three kinds of layers: an input layer, hidden layers, and an output layer, as illustrated in Figure 4.3a. A gradient descent optimization method called backpropagation identifies, or trains, the weights in neural networks; the procedure is represented visually in Figure 4.3b. In a feedforward neural network, the input layer has the same number of nodes as the input, and the input is passed to the subsequent layer. One or more hidden layers calculate the weighted sum of all outputs from the previous layer and transform the weighted sum with an activation function. Activation functions include linear, sigmoid, hyperbolic tangent, exponential linear unit, rectified linear unit, and scaled exponential linear unit. The output layer repeats the same process as the hidden layers with its own weights. Each weight is updated in each training step, with the update determined by differentiating the prediction error with respect to the respective weight. Thanks to the chain rule, the backpropagation algorithm is computationally efficient because almost all components of each derivative are calculated during the forward pass (i.e. the prediction calculation) of the network and can be reused when updating the weights.
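The forward pass and backpropagation update described above can be sketched in a few lines of numpy. This is an illustrative one-hidden-layer network trained on a toy regression target, not the chapter's DNN; the layer sizes, learning rate, and iteration count are arbitrary choices.

```python
# Minimal sketch of forward pass + backpropagation: one hidden layer
# with sigmoid activation, linear output, full-batch gradient descent.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.random((32, 10))                 # 32 samples, 10 features
y = X.sum(axis=1, keepdims=True) / 10.0  # toy target in [0, 1]

W1 = rng.normal(0, 0.1, (10, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1));  b2 = np.zeros(1)
lr = 0.5

for _ in range(2000):
    # forward pass: weighted sums transformed by the activation function
    h = sigmoid(X @ W1 + b1)
    y_hat = h @ W2 + b2                  # linear output layer
    err = y_hat - y

    # backward pass: chain rule, reusing h from the forward pass
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = err @ W2.T * h * (1 - h)        # sigmoid derivative is h(1 - h)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean(err ** 2))
print(f"final training MSE: {mse:.5f}")
```

The hidden-layer gradient reuses `h` computed in the forward pass, which is the efficiency point made in the paragraph above.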


Figure 4.3 (a) Neural network structure of each layer node in DNN; (b) backpropagation calculation algorithm.


4.2.2.2 Deep Belief Networks


The DBN is a probabilistic generative model for unsupervised and supervised learning [52]. Compared with a traditional neural network, a generative model establishes a joint distribution between observation data and labels. A DBN comprises multiple layers of neurons divided into visible and hidden neurons; the bottom layer represents data vectors, with each neuron representing one dimension of the data vector. The building block of the DBN is the restricted Boltzmann machine (RBM). By stacking RBMs layer by layer, the DBN model extracts features from the original data and obtains high-level representations. Its core is a greedy layer-by-layer learning algorithm that optimizes the connection weights of the deep network: an unsupervised layer-by-layer training method first extracts the relevant features effectively, and then, with a corresponding output layer added, the model is refined through reverse supervised fine-tuning. The unsupervised layer-by-layer training can learn complex nonlinear functions that map directly from input to output, which is also the key to its robust feature-extraction ability. The training goal of an RBM is to make its Gibbs distribution fit the input data as closely as possible, that is, to make the distribution represented by the RBM network approach the distribution of the input samples. The Kullback–Leibler (KL) divergence between the sample distribution and the marginal distribution of the RBM network expresses the difference between them. Because deep networks easily fall into local optima, and the choice of initial parameters strongly affects where the network finally converges, DBN training is divided into pretraining and fine-tuning.

The RBMs perform layer-by-layer unsupervised training on the deep network, and the parameters obtained from training each layer are taken as the initial parameters of the corresponding layer of the deep network; these parameters lie in a good region of the parameter space, which constitutes the pretraining. After the RBMs have initialized the deep network parameters layer by layer, the deep network is trained with the traditional backpropagation (BP) algorithm so that its parameters converge to a good position; this is the fine-tuning. In the pretraining, parameters are learned with the unsupervised layer-by-layer method: first, the data vector x and the first hidden layer form an RBM whose parameters are trained; then those parameters are fixed, h1 is used as the visible layer vector, h2 as the hidden layer vector, and the second RBM is trained.


By training the weights between its neurons, the whole network can generate training data according to the maximum probability. A DBN can therefore be used not only to identify features and classify data but also to generate data. The visible units accept input, and the hidden units extract features, which is why hidden units are also called feature detectors. The connection between the top two layers is undirected and forms an associative memory, while the connections between the lower layers are directed. The RBM and its training process are shown in Figure 4.4a. DBN training proceeds layer by layer: the hidden layer of the previous RBM provides the input to the visible layer of the next. Figure 4.4b illustrates a schematic diagram of a DBN composed of three RBM layers. In each layer, the data vector is used to infer the hidden layer, and the hidden layer is then regarded as the data vector of the next layer. Each RBM can be trained separately. An RBM has only two layers of neurons: the visible layer, composed of visible units used to input training data, and the hidden layer, composed of hidden units used as feature detectors. The visible layer of the lowest RBM receives the input feature data, and the hidden layer of the highest RBM is connected to a backpropagation layer to obtain the model output.
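The contrastive-divergence (CD-1) update commonly used to train an RBM toward the goal described above can be sketched as follows. This is a minimal illustration with binary units fitting a single toy pattern; the layer sizes and learning rate are assumptions, not values from the chapter.

```python
# Sketch of one CD-1 update for a binary restricted Boltzmann machine:
# positive phase, one Gibbs step, negative phase, gradient approximation.
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, (n_visible, n_hidden))
a = np.zeros(n_visible)   # visible biases
b = np.zeros(n_hidden)    # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, lr=0.1):
    """One CD-1 update; returns the reconstruction error."""
    global W, a, b
    p_h0 = sigmoid(v0 @ W + b)                       # P(h = 1 | v0)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + a)                     # reconstruction
    p_h1 = sigmoid(p_v1 @ W + b)
    # gradient approximation: <v h>_data - <v h>_model
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    a += lr * (v0 - p_v1)
    b += lr * (p_h0 - p_h1)
    return float(np.mean((v0 - p_v1) ** 2))

v = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
errors = [cd1_step(v) for _ in range(200)]
print(f"reconstruction error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

In a DBN, repeating this training for each layer and feeding the hidden activations upward is exactly the layer-by-layer pretraining described above.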


Figure 4.4 (a) Network structure and training process of restricted Boltzmann machine; (b) the structure of DBN composed of three-layer RBM.


4.2.2.3 Fully Connected Residual Networks


Model accuracy improves as network layers are added, but the network saturates beyond a certain depth: training and testing accuracy then decline rapidly, indicating that very deep networks are difficult to train. The deeper the network, the worse the prediction, because the gradient vanishes as it propagates backward (with the sigmoid function, a signal of amplitude 1 has its gradient attenuated to 0.25 of its value at each backward step, and the attenuation compounds with depth). As a result, the weights of the earlier layers cannot be adjusted effectively. He et al. proposed the residual network (ResNet) to address this [53]. The ResNet model can keep improving training and test accuracy when the original network is close to saturation, and adding the ResNet structure to the network resolves the convergence difficulty caused by deepening the network and helps the whole network converge faster. Figure 4.5a shows a 4-layer fully connected network with 14 nodes in the input layer and 5 nodes in the output layer. The general network structure is shown in Figure 4.5b, and the ResNet structure in Figure 4.5c. When the ResNet is built from fully connected layers, it constitutes an FC-ResNet. It is difficult for a stack of layers to directly fit a potential identity mapping function

H(x) = x,   (4.1)

which may be why deep neural networks are challenging to train. However, if the block is designed as

H(x) = F(x) + x,   (4.2)

the problem is transformed into learning the residual function

F(x) = H(x) − x.   (4.3)

When F(x) equals zero, the block constitutes an identity mapping. If the residual is not zero, the stacked layers learn new features on top of the input features and perform better.





Figure 4.5 (a) 4-Layer fully connected network with 14 nodes in the input layer and 5 nodes in the output layer; (b) structure unit of general network structure; (c) structure unit of ResNet unit.


Suppose an existing shallow network (Shallow Net) has reached saturation accuracy. If several identity mapping layers (y = x, output equals input) are added to it, the depth of the network increases but the error will at least not grow; that is, the deeper network should not increase the error on the training set. This is the inspiration for the deep ResNet: use identity mappings to transmit the output of earlier layers directly to later ones.
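The FC-ResNet structure unit of Figure 4.5c can be sketched as follows. The sketch shows that with zero weights the residual branch F(x) vanishes and the block reduces to an identity mapping on nonnegative inputs; the dimensions are illustrative assumptions.

```python
# Sketch of an FC-ResNet block: two fully connected weight layers whose
# output F(x) is added to the shortcut input x, giving ReLU(F(x) + x).
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, b1, W2, b2):
    """y = ReLU(F(x) + x) with F(x) = ReLU(x W1 + b1) W2 + b2."""
    f = relu(x @ W1 + b1) @ W2 + b2   # residual branch F(x)
    return relu(f + x)                # shortcut connection adds x

d = 8
x = np.arange(d, dtype=float) / d     # nonnegative test input

# zero weights -> F(x) = 0 -> identity mapping, as in Eq. (4.1)
W0, b0 = np.zeros((d, d)), np.zeros(d)
y = residual_block(x, W0, b0, W0, b0)
print(np.allclose(y, x))  # True: the block passes x through unchanged
```

This is why adding such blocks cannot hurt a saturated shallow network: the block can always fall back to the identity by driving the residual branch to zero.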


4.2.2.4 Random Forest


RF is composed of multiple decision trees [54], each of which is different. When building a decision tree, some samples are randomly drawn from the training data, and rather than using all features of the data, some features are randomly selected for training; each tree thus uses different samples and features, and the training results differ. RF applies the general technique of bootstrap aggregating (bagging) to tree learners. As depicted in Figure 4.6, given training data with B inputs (X = x1, …, xB) and responses (y = y1, …, yB), bagging repeatedly selects random samples with replacement from the training data and trains a regression tree fb on each sample (b = 1, …, B). When constructing each regression tree, m features or input variables are randomly selected from the P available features (m < P), which is called feature bagging. Each regression tree grows freely without restriction until it cannot split further, that is, until it reaches the minimum node size. Finally, the B regression trees form the RF algorithm. The advantages of RF are outstanding: (i) as an ensemble algorithm it achieves high accuracy, (ii) it is not prone to overfitting (random samples and random features), (iii) it has strong anti-noise ability (noise refers to abnormal data), and (iv) it can process high-dimensional data with many features.
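The bootstrap and feature-bagging sampling described above can be sketched as follows. The tree fitting itself is omitted, and the dimensions (B = 200 samples, P = 10 features, m = 3) are illustrative assumptions, not the chapter's settings.

```python
# Sketch of the sampling random forest performs before growing each
# regression tree: bootstrap rows with replacement, then pick m of the
# P features at random (feature bagging).
import numpy as np

rng = np.random.default_rng(7)
B, P, m = 200, 10, 3          # samples, total features, features per tree
X = rng.random((B, P))
y = X[:, 0] + 0.1 * rng.standard_normal(B)

def draw_bagged_sample(X, y, m):
    """Return one bootstrap sample restricted to m random features."""
    rows = rng.integers(0, len(X), size=len(X))   # with replacement
    cols = rng.choice(X.shape[1], size=m, replace=False)
    return X[np.ix_(rows, cols)], y[rows], cols

X_b, y_b, cols = draw_bagged_sample(X, y, m)
print(X_b.shape, cols)        # (200, 3) and the 3 chosen feature indices

# a bootstrap sample of size B leaves roughly 1/e ~ 36.8% of rows
# out-of-bag, which is what makes RF resistant to overfitting
rows = rng.integers(0, B, size=B)
oob_frac = 1 - len(set(rows.tolist())) / B
print(f"out-of-bag fraction: {oob_frac:.2f}")
```

Each of the B trees would then be fit on its own `(X_b, y_b)` pair, and the forest prediction is the average of the tree predictions.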


Figure 4.6 Flowsheet of random forest method.


4.2.3 Optimization and Prediction


A genetic algorithm is based on natural selection and genetics [55, 56]. Since the 1990s, it has been widely used because of its high efficiency and strong robustness. In a genetic algorithm, the population of candidate solutions to an optimization problem evolves toward better solutions. Evolution starts from a group of randomly generated individuals and proceeds iteratively; the population in each iteration is called a generation. In each generation, the fitness of every individual in the population is evaluated, the fitter individuals are stochastically selected from the current population, and each selected individual's genome is modified (recombined, and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration. Usually, the algorithm terminates when the number of generations reaches a maximum or the population reaches a satisfactory fitness level.
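The generational loop described above can be sketched with a minimal genetic algorithm. The objective here (minimizing a simple sphere function) is a stand-in for the chapter's multi-objective problem, and the selection, crossover, and mutation operators are illustrative assumptions.

```python
# Minimal genetic algorithm sketch: evaluate fitness, select fitter
# parents, recombine, mutate, iterate over generations.
import numpy as np

rng = np.random.default_rng(3)

def fitness(pop):
    return -np.sum(pop ** 2, axis=1)   # higher is better (minimize x^2)

pop = rng.uniform(-5, 5, (40, 4))      # 40 individuals, 4 genes each
for generation in range(100):
    f = fitness(pop)
    # tournament selection: keep the fitter of two random individuals
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # uniform crossover between consecutive parent pairs
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # random mutation
    children += 0.1 * rng.standard_normal(pop.shape)
    pop = children

best = pop[np.argmax(fitness(pop))]
print(f"best objective value: {float(np.sum(best ** 2)):.3f}")
```

In the chapter's setting, the genome would encode ML hyperparameters (or operating conditions) and the fitness would be the model's verification error or a weighted multi-objective score.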


In this study, the genetic algorithm and the HDM models are combined to optimize the model parameters of the four deep learning models. This combination efficiently searches for the minimum of the objective function while avoiding exhaustive calculation and analysis of every response. A high-performance, practical Python genetic algorithm toolbox is used for the optimization.


4.2.4 Life Cycle Multidimensional Evaluation


Although recent advances in life cycle methodology have expanded the range of environmental impacts considered, including biodiversity and ecosystem services [57], when considering social and livelihood factors, the application trend of life cycle assessment in these critical aspects is still limited. Life cycle sustainability assessment (LCSA) can assess the impact of target products, processes, and supply chain decisions on the ecological environment for projects from multiple dimensions [58]. In this study, LCSA can quantitatively provide a feasible framework for the proposed dual-core-driven process model.


To analyze the impact of the proposed HDM model on society, people’s livelihood, and the environment, the multidimensional sustainability assessment method of the life cycle proposed in our previous work is adopted. In this method, greenhouse gas (GHG) emissions and fossil energy demand (FED) are integrated into the comparison range of gross domestic product (GDP, US$) generated based on the HDM model [59, 60].


4.3 Results and Discussion


4.3.1 Data Analysis and Statistics Before Modeling


4.3.1.1 Analysis of Experimental Data


Figure 4.7 depicts the detailed data (including feature categories and ranges) for a residence time of eight hours in the experimental dataset; other experimental data with residence times of 4–12 hours are depicted in the Appendix. The X-axis and Y-axis of the three-dimensional plots are reaction temperature and pressure, respectively. According to the composition analysis of the chromatographic data, the reaction temperature is the main feature affecting the conversion and other key indicators. The conversion rate fluctuates over a wide range, from 0% to 100%. Residence time and reaction pressure are the other two key features determining the product distribution in the EGtGA process. GA yields, glyoxalic acid (GAD) yields, formic acid (FA) yields, and CO2 yields range from zero to upper limits of 9–89%.


The above results show that the preliminary experimental dataset of this study comprises 610 groups. The experimental data are extensive enough to cover a broad range of product distributions and to support the construction of a mechanism model. The database of the "dual-core-driven" deep learning HDM model also includes parameters such as reaction rate constants and activation energies.


The reaction temperature of the HDM model was 20–90 °C, the reaction pressure was 1–3 MPa, and the residence time was 4–12 hours. By comprehensively analyzing Figure 4.7 and the database, we can intuitively conclude that the reaction temperature is the most crucial factor affecting the product yields of the EGtGA process. The yield of the target product GA is between 0.73% and 75.85%, while the CO2 and formic acid yields of the EGtGA process are 0.11–54.11% and 0.01–9.43%, respectively. Because of the varied reaction conditions covered by the HDM model, the product yield distribution spans a large range. This further motivates applying the HDM model to identify and tune the essential factors affecting the EGtGA process so as to maximize the yield of the glycolic acid product.


4.3.1.2 Data Dependence Analysis


Figure 4.8 illustrates the heat map of the linear regression correlation coefficient matrix between the selected features and the desired targets. The dark blue and light navy blue grids (absolute value ≥0.80) indicate strong positive and negative correlations, respectively. Some areas on both sides of the diagonal are relatively dark, indicating a linear relationship between these features and the conversion and intermediate product yields of the EGtGA process. For example, the conversion rate of the HDM model shows a strong positive correlation with reaction temperature (see F1 in Table 4.1), with a correlation coefficient of 0.913.
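The quantity mapped in Figure 4.8 is a plain Pearson correlation coefficient matrix. The snippet below is a minimal sketch on synthetic data; the variable names, ranges, and the strength of the temperature–conversion coupling are illustrative stand-ins, not the chapter's 610 experimental groups:

```python
import numpy as np

# Hypothetical mini-dataset; values are synthetic, not the chapter's data.
rng = np.random.default_rng(0)
T = rng.uniform(20, 90, 200)            # reaction temperature / degC
P = rng.uniform(1, 3, 200)              # reaction pressure / MPa
conv = 1.1 * T + rng.normal(0, 5, 200)  # conversion driven mainly by T

# Pearson correlation coefficient matrix (rows/cols: T, P, conversion)
corr = np.corrcoef(np.vstack([T, P, conv]))
print(np.round(corr, 3))
```

With this construction the temperature–conversion entry is close to 1 while the pressure–conversion entry is near zero, mirroring the strong F1 correlation reported above.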


Figure 4.7 Data analysis and statistics with the reaction condition resident time eight hours: (a) the changing trend of conversion rates with reaction temperature and pressure; (b) the changing trend of GA yields with reaction temperature and pressure; (c) the changing trend of GAD yields with reaction temperature and pressure; (d) the changing trend of FA yields with reaction temperature and pressure; (e) the changing trend of CO2 yields with reaction temperature and pressure.


4.3.2 Model Comparison and Feature Importance Analysis


4.3.2.1 Model Comparison


Further analysis is carried out based on the above four HDM models to provide more details about the prediction performance for each target and the role of the input features. Figures 4.9–4.13 show the predicted and actual values for each target. The model parameter settings greatly influence the R2 and root-mean-square error (RMSE) of the ML models, and unreasonable learning rates or iteration counts readily produce overfitting. Therefore, DNN, DBN, and FC-ResNet all use 500 training iterations and a learning rate of 0.0001 in this study. In the RF method, max_depth and n_estimators are 10 and 100, respectively. The experimental and predicted conversion rates (training and testing) of the HDM model by DNN, DBN, FC-ResNet, and RF are shown in Figure 4.9a–d. The predicted conversion of the EGtGA process is in the range of 20–97%. It is worth noting that the R2 of the “dual-core driven” HDM model for the conversion prediction is 0.932–0.996. Moreover, a comprehensive comparison of Figure 4.9 shows that FC-ResNet has the highest R2 of 0.996 together with the lowest RMSE of 1.37.
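The R2 and RMSE values quoted throughout this section follow their standard definitions. A small self-contained sketch of both metrics (not tied to any of the four models) is:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Sanity check on a toy target vector: perfect prediction gives R^2 = 1, RMSE = 0
y = np.array([10.0, 40.0, 70.0, 95.0])
print(r2_score(y, y), rmse(y, y))  # -> 1.0 0.0
```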


Figure 4.8 Heat map of the linear regression correlation coefficient matrix between the selected features and the desired targets.


The experiment and prediction values of GA yields in the EGtGA process (training and testing) in the HDM model by DNN, DBN, FC-ResNet, and RF are shown in Figure 4.10a–d. The predicted GA yields of the EGtGA process are in the range of 0.02–79.5%. It is worth noting that the R2 of the “dual-core driven” HDM model for the GA yield prediction is 0.902–0.988. Similarly, a comprehensive comparison of Figure 4.10 shows that FC-ResNet has the highest R2 of 0.988 together with the lowest RMSE of 1.95.


Figure 4.9 Experiment and prediction values of EGtGA conversion rates (training and testing) in the HDM model based on (a) DNN, (b) DBN, (c) FC-ResNet, and (d) RF.


Figure 4.11 depicts the experiment and prediction values of GAD yields in the EGtGA process (training and testing) in the HDM model by DNN, DBN, FC-ResNet, and RF. The predicted GAD yields of the EGtGA process are in the range of 0.02–9.21%. It is worth noting that the R2 of the “dual-core driven” HDM model for the GAD yield prediction is 0.913–0.996. Similarly, a comprehensive comparison shows that FC-ResNet has the highest R2 of 0.996 together with the lowest RMSE of 0.17.


Figure 4.12 illustrates the experiment and prediction values of FA yields in the EGtGA process (training and testing) in the HDM model by DNN, DBN, FC-ResNet, and RF. A comprehensive analysis shows that the prediction results of DNN and DBN are worse than those of FC-ResNet and RF. The predicted FA yields of the EGtGA process are in the range of 0.24–19.13%. It is worth noting that the R2 of the “dual-core driven” HDM model for the FA yield prediction is 0.899–0.995. Similarly, a comprehensive comparison shows that FC-ResNet has the highest R2 of 0.995 together with the lowest RMSE of 0.35.


Figure 4.10 Experiment and prediction values of GA yields in the EGtGA process (training and testing) using the HDM model based on (a) DNN, (b) DBN, (c) FC-ResNet, and (d) RF.


The experiment and prediction values of CO2 yields in the EGtGA process (training and testing) in the HDM model by DNN, DBN, FC-ResNet, and RF are demonstrated in Figure 4.13. Similar to Figure 4.12, it can be seen visually that the DNN and DBN prediction results are worse than those of FC-ResNet and RF. The predicted CO2 yields of the EGtGA process are in the range of 0.12–77.29%. It is worth noting that the R2 of the “dual-core driven” HDM model for the CO2 yield prediction is 0.960–0.998. A comprehensive comparison shows that FC-ResNet has the highest R2 of 0.998 together with the lowest RMSE of 0.84.


Figure 4.11 Experiment and prediction values of GAD yields in the EGtGA process (training and testing) using the HDM model based on (a) DNN, (b) DBN, (c) FC-ResNet, and (d) RF.


This study further adopts the “dual-core driven” models to perform the prediction and provides the calculated feature importances. The deep neural networks and RF compute feature weights in distinctly different ways. The neural networks estimate feature importance with a permutation algorithm: first train the network; then, to compute the importance of the ith feature, shuffle that feature and measure how much the evaluation metric changes after the disruption. In RF, the corresponding importance is calculated as follows: if the out-of-bag accuracy decreases significantly after random noise is added to a feature, that feature has a significant impact on the model’s predictions, that is, its importance is relatively high.
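The permutation idea described above can be sketched in a few lines. The following is an illustrative implementation with a toy stand-in for the trained network; RMSE is used as the tracked metric, which is an assumption (the chapter does not specify the index):

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
    """Importance of feature i = mean RMSE increase after shuffling column i."""
    rng = np.random.default_rng(seed)
    base = np.sqrt(np.mean((model_fn(X) - y) ** 2))
    imp = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, i])        # disrupt only the i-th feature
            scores.append(np.sqrt(np.mean((model_fn(Xp) - y) ** 2)))
        imp[i] = np.mean(scores) - base  # metric change after the disruption
    return imp

# Toy "trained model": the target depends on feature 0 only
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0]
model_fn = lambda Z: 2.0 * Z[:, 0]       # stands in for the trained network
imp = permutation_importance(model_fn, X, y)
print(np.round(imp, 2))                  # feature 0 dominates; others ~0
```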


Figure 4.12 Experiment and prediction values of FA yields in the EGtGA process (training and testing) using the HDM model based on (a) DNN, (b) DBN, (c) FC-ResNet, and (d) RF.


4.3.2.2 Feature Importance Analysis


Figure 4.14 depicts the importance of each input feature to the five targets of the HDM model. There is a certain difference between the feature importance results of the neural-network-based dual-core-driven models and those of RF. Figure 4.14a–c intuitively shows that the reaction activation energies and catalyst features (including Pt loading, Mn loading, and TOF) are more critical than the reaction conditions. The sum of these feature importances (reaction activation energies, catalyst composition, and TOF) is 70–71%, making them the dominant and decisive factors. Furthermore, the feature importance results show that the activation energy of the second-step reaction (see Figure 4.2c) is the most important feature. This also confirms our previous research results: the second-step reaction is the rate-controlling step in the oxidation of ethylene glycol to glycolic acid.


Figure 4.13 Experiment and prediction values of CO2 yields in the EGtGA process (training and testing) using the HDM model based on (a) DNN, (b) DBN, (c) FC-ResNet, and (d) RF.


From Figure 4.14d, it can be found that for RF the reaction temperature appears as the dominant individual factor; this discrepancy from the neural networks is caused by the differences between the two kinds of algorithms. Even so, the sum of the specific feature importances (reaction activation energies, catalyst composition, and TOF) in Figure 4.14d is about 65%, so they remain the dominant factors. All models clearly show that the reaction activation energies, catalyst composition, and TOF are the main features affecting the conversion and yields of the EGtGA reaction. Since our study is committed to providing a “dual-core drive” model that can predict and optimize the industrial application of the EGtGA process, features biased toward the macro scale were chosen in the subsequent optimization. Hence, the catalyst composition (Pt loading and Mn loading) and the reaction conditions (reaction temperature, reaction time, and reaction pressure) were selected as decision variables of the multi-objective optimization.


Figure 4.14 Results of feature importances for: (a) DNN, (b) DBN, (c) FC-ResNet, and (d) RF.


4.3.3 Performance and Feature Analysis of the Optimized FC-ResNet–GA Model


By comprehensively analyzing the four algorithms (DNN, DBN, FC-ResNet, and RF), it can be intuitively concluded that FC-ResNet performs very well on both the training and test sets compared with the other three methods. For the training set, the RMSE of the FC-ResNet algorithm for predicting conversion rate, GA yield, GAD yield, FA yield, and CO2 yield is 1.55, 1.18, 0.15, 0.25, and 0.17, respectively. For the test set, the corresponding RMSE values are 1.37, 1.95, 0.17, 0.25, and 0.17, respectively. On the experimental data, FC-ResNet thus achieves the lowest RMSE on both the training and test sets, showing excellent prediction performance. Therefore, the model parameters of FC-ResNet were optimized in this section by integrating a genetic algorithm (GA). The optimized algorithm (termed FC-ResNet–GA) is then applied to train and test the 6720 datasets (including 610 experimental datasets and 6110 mechanism-simulation datasets).
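The section above couples FC-ResNet with a genetic algorithm for parameter tuning. As a toy illustration of that idea, the loop below evolves two hypothetical hyperparameters (a learning-rate exponent and a hidden-layer width) against a surrogate fitness standing in for validation RMSE; it is a sketch, not the chapter's actual FC-ResNet–GA coupling:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(ind):
    # Hypothetical surrogate for validation RMSE, minimized at
    # lr_exponent = -4 (i.e. lr = 1e-4) and hidden width = 64
    lr_exp, width = ind
    return (lr_exp + 4.0) ** 2 + ((width - 64.0) / 32.0) ** 2

# Population of candidate (lr_exponent, width) pairs
pop = np.column_stack([rng.uniform(-6, -2, 20), rng.uniform(8, 256, 20)])
for _ in range(60):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[:10]]                       # elitist selection
    idx = rng.integers(0, 10, (10, 2))
    kids = 0.5 * (parents[idx[:, 0]] + parents[idx[:, 1]])  # blend crossover
    kids += rng.normal(0.0, [0.1, 4.0], (10, 2))            # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmin([fitness(p) for p in pop])]
print(np.round(best, 2))   # converges toward (-4, 64)
```

Elitism (carrying the best parents forward unchanged) guarantees the best fitness never regresses between generations, which is why this simple loop reliably homes in on the surrogate optimum.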


Figure 4.15 shows the training and test results for the simulation and experiment data in the EGtGA process using the HDM model based on the FC-ResNet–GA model. As illustrated in this figure, the RMSE of the FC-ResNet–GA model on the training set for predicting conversion rate, GA yield, GAD yield, FA yield, and CO2 yield is 1.53, 1.93, 0.03, 0.05, and 0.86, respectively. For the test set, the corresponding RMSE values of the FC-ResNet–GA model are 1.12, 1.29, 0.06, 0.07, and 0.56, respectively.


4.3.4 Process Multi-objective Optimization and Experimental Verification


Further investigation is carried out, and multi-objective optimization of the process parameters is implemented using the HDM model based on the FC-ResNet–NSGA-III model. In this section, Python 3.8 is used to program the NSGA-III algorithm in Pymoo, perform the multi-objective optimization of the process parameters, and solve the problem. Finally, a Pareto solution set that meets the requirements is obtained. The objective functions and constraints are shown in Eq. (4.4).



The results of the Pareto sets (mainly the reaction conditions and supported catalyst composition for experimental verification, i.e. reaction temperature, reaction pressure, reaction time, Pt loading, and Mn loading) for the process multi-objective optimization using the NSGA-III algorithm and the experimental verification are illustrated in Figure 4.16. It should be pointed out that the Pt and Mn loadings in the Pareto sets are 1.40 and 0.90 wt%, respectively. The repeatability of the experimental verification is assessed by adopting a feasible solution from the Pareto solution set. The relative errors between experiment and simulation for EG conversion and GA yield are about 4%, while those for the GAD, CO2, and FA yields are about 10%. The main reason is that the GAD, CO2, and FA yields all lie in a low-value range (0–2%), where a small absolute error can produce a relatively significant relative error. More detailed results of the Pareto sets and the experimental verification are shown in the Appendix.
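The chapter obtains its Pareto set with NSGA-III in Pymoo. As a minimal illustration of what membership in a Pareto (noninferior) set means, a brute-force nondominated filter over hypothetical (conversion, GA yield) pairs can be written as:

```python
import numpy as np

def pareto_front(points):
    """Indices of points not dominated by any other point (maximize all objectives)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical (conversion %, GA yield %) pairs, not the chapter's solutions
pts = np.array([[90.0, 70.0], [95.0, 60.0], [85.0, 75.0], [80.0, 50.0]])
print(pareto_front(pts))  # -> [0, 1, 2]; the last point is dominated
```

The first three points trade conversion against GA yield without any one beating another on both objectives, so all three are noninferior, while the last point is strictly worse than the first and drops out.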


Figure 4.15 The training and test results for the simulation + experiment data in the EGtGA process using the HDM model based on the FC-ResNet–GA model: (a) conversion, (b) GA yields, (c) GAD yields, (d) FA yields, (e) CO2 yields, and (f) importance analysis.


Figure 4.16 (a) The results of Pareto sets for the process multi-objective optimization using the NSGA-III algorithm and (b) experiment verification.


4.3.5 LCSA Based on the Optimized Parameters


4.3.5.1 Original Life Cycle Framework


Further LCSA investigation is implemented based on the optimized parameters. The original framework for executing the life cycle assessment is shown in Figure 4.17 [61, 62]. It should be mentioned that the black dashed box indicates the existing foundation, while the red implementation box indicates that the execution starts from the current step. As demonstrated, the feasible solution from the Pareto-optimal set applied in this study comprises a reaction temperature, reaction pressure, reaction time, Pt loading, and Mn loading of 63.53 °C, 1.54 MPa, 9.81 hours, 1.40 wt%, and 0.90 wt%, respectively. This selected feasible solution is used to carry out the process simulation.


4.3.5.2 Life Cycle Inventory Analysis


Detailed and comprehensive life cycle inventory analysis is a crucial procedure for implementing LCSA, as it intuitively gives critical data such as nonrenewable resource and energy consumption in the different life cycle stages. These inventory data lay a solid foundation for the subsequent sustainable life cycle interpretation and evaluation. Figure 4.18 demonstrates the life cycle inventory results, which contain the direct and indirect FED and GHG emissions for creating one million US$ of GDP. Figure 4.18a shows the life cycle inventory analysis before optimization for the ethylene glycol (EG) to GA process using biomass as feedstock; the simulation and inventory data are from our previous work [33, 34]. The simulation and inventory data of the life cycle inventory analysis after optimization are depicted in Figure 4.18b. By comprehensively comparing the data before and after optimization, it can be intuitively concluded that the consumption of biomass raw materials is reduced by about 3% using the optimization strategy proposed in this study.

As shown in Figure 4.17, the framework comprises four levels: conceptual design, lab-scale experiment, simulation and optimization, and life cycle sustainable assessment.

Figure 4.17 The framework of life cycle multi-objective sustainable optimization. The black dashed box indicates the existing foundation, while the red implementation box indicates that the execution starts from the current step.


4.3.5.3 Life Cycle Sustainable Interpretation and Assessment


Figure 4.19 illustrates the life cycle sustainable interpretation and evaluation results based on the inventory analysis from Figure 4.18. After optimization, the EG to GA process exhibits obvious techno-economic and environmental advantages. As shown in Figure 4.19a, the FED after optimization is 6201.91 MJ (one million GDP)−1, a decrease of 2.96%. The GHG emissions before optimization are 39.26 tCO2eq (one million GDP)−1, 1.03 times larger than the emissions after optimization (see Figure 4.19b). The breakdown analyses of FED and GHG are illustrated in Figure 4.19c,d. Decomposition of each unit’s life cycle sustainability results is carried out to further recognize the bottleneck units hidden in the EGtGA process. The syngas production process contributes the most to FED and GHG emissions, accounting for about 68% of the total FED and 45% of the total GHG, respectively. The sustainability analysis results intuitively indicate that syngas production is the largest bottleneck process. Therefore, optimizing the parameters of the syngas production process and developing energy recovery technology is vital to enhancing the process performance.
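The reported reductions are mutually consistent, as a quick back-of-envelope check shows. All numbers below are taken from the text above; the before-optimization FED is inferred from the stated 2.96% decrease:

```python
# Back-of-envelope check of the reported LCSA reductions
fed_after = 6201.91            # MJ (one million GDP)^-1, after optimization
fed_drop = 0.0296              # reported 2.96% decrease
fed_before = fed_after / (1.0 - fed_drop)

ghg_before = 39.26             # tCO2eq (one million GDP)^-1, before optimization
ghg_after = ghg_before / 1.03  # "1.03 times larger" than after optimization
ghg_drop = 1.0 - ghg_after / ghg_before

print(round(fed_before, 1), round(ghg_after, 2), round(100 * ghg_drop, 1))
# -> 6391.1 38.12 2.9
```

The implied GHG reduction of roughly 2.9% matches the ~3.00% figure quoted in the conclusion within rounding.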


Figure 4.18 The life cycle inventory analysis for the EG to GA process using biomass as feedstock: (a) before optimization; (b) after optimization.


Figure 4.19 Overall resource consumption, greenhouse gas emission results, and the breakdown distribution analysis of FED and GHG for the EG to GA process: (a) Overall results of FED; (b) Overall results of GHG; (c) breakdown results of FED; (d) breakdown results of GHG.


4.4 Conclusion


The kinetic reaction mechanism, catalyst properties, and reaction conditions for alkali-free, low-temperature selective oxidation for bio-GA production were collected to construct a hybrid deep learning framework driven by data and reaction mechanisms for predicting sustainable glycolic acid production performance. The FC-ResNet model exhibited superior performance for predicting the conversion rate and the GA and byproduct yields. The feature importance was further analyzed: the reaction activation energies and catalyst features (Pt loading, Mn loading, and TOF) are the dominant and decisive factors, being considerably more critical than the reaction conditions. The HDM model based on FC-ResNet–NSGA-III was then used to perform multi-objective optimization of the process parameters; the Pareto sets were obtained, and the optimized models were experimentally validated. The selected parameters from the Pareto-optimal set, including reaction temperature, reaction pressure, reaction time, Pt loading, and Mn loading, are 63.53 °C, 1.54 MPa, 9.81 hours, 1.40 wt%, and 0.90 wt%, respectively. A detailed and comprehensive life cycle inventory analysis was also implemented. The LCSA further identifies that, using the optimized operating parameters, the FED and GHG decrease by 2.96% and 3.00%, respectively. The hybrid data and mechanism (HDM) model offers a novel insight and strategy to accelerate engineered selective oxidation for the desired GA production.


4.A Pareto Optimization Set


The Pareto-optimal solution set is produced by a multi-objective intelligent algorithm: among all candidate solutions to a multi-objective problem, it filters out the set of noninferior (nondominated) solutions, from which a reasonably optimal solution can be selected. In this study, the Pareto-optimal solution set is shown in Figure 4.A.1. The Pareto-optimal solution set only provides the noninferior solutions of the multi-objective problem. The reaction conditions in the red box are selected as the optimal solution.

Figure 4.A.1 tabulates, for each Pareto-optimal solution, the reaction temperature T/°C, pressure P/MPa, time/h, conversion, and the GA, GAD, CO2, and FA yields in wt%.

Figure 4.A.1 The Pareto optimization Set.


4.B Experimental Data


Data for reaction residence times of 4–12 hours were established and are shown in Figures 4.B.1 and 4.B.2.


Figure 4.B.1 Data analysis and statistics with the reaction condition resident time four hours: (a) the changing trend of conversion rates with reaction temperature and pressure; (b) the changing trend of GA yields with reaction temperature and pressure; (c) the changing trend of GAD yields with reaction temperature and pressure; (d) the changing trend of FA yields with reaction temperature and pressure; (e) the changing trend of CO2 yields with reaction temperature and pressure.


Figure 4.B.2 Data analysis and statistics with the reaction condition resident time 12 hours: (a) the changing trend of conversion rates with reaction temperature and pressure; (b) the changing trend of GA yields with reaction temperature and pressure; (c) the changing trend of GAD yields with reaction temperature and pressure; (d) the changing trend of FA yields with reaction temperature and pressure; (e) the changing trend of CO2 yields with reaction temperature and pressure.


4.C Construction Method of Process Simulation Database Using Reaction Mechanism


On the basis of the above reaction mechanism identification, we analyzed the possible reaction paths over the Pt catalyst used in the selective oxidation of ethylene glycol to glycolic acid. The reaction path diagram is shown in Figure 4.C.1.


Based on the reaction paths shown in Figure 4.C.1, the reaction kinetics parameters in this study are obtained by regression from the reaction mechanism and a large body of experimental data. The detailed steps of the kinetics-parameter calculation are described below (see also Figure 4.C.2):


Figure 4.C.1 The reaction networks in the EGtGA process.



  1. Carry out the detailed experimental process and complete the accumulation of the original experimental data. It should be noted that the catalyst, material balance, etc. are considered in this step. In this step, 610 groups of experimental data were generated.
  2. The next step is feature identification. The purpose of this step is to define the features on which the database for the data-mechanism-driven deep learning model is built.
  3. The following step is process simulation (process modeling). In this procedure, the Aspen Plus process simulation software is used to establish the simulation model. For more details on the process simulation of the EGtGA process, we recommend our previous work (ACS Sustainable Chemistry & Engineering 2021, 9 (32), 10948–10962).

    Figure 4.C.2 The detailed flowsheet of the database generation procedure.


  4. Developing the reaction model to achieve the mechanism-assisted enhancement of the database is the key task in this step. Determining the reaction paths and calculating the reaction kinetics parameters are the most essential parts of developing the reaction model.
  5. The establishment of a reaction kinetics model forms the core of the overall reaction model. Two methods for calculating the reaction kinetic parameters are as follows:

    1. In the first method, the reaction kinetics parameters are regressed from the reaction mechanism, the reaction path, and the experimental data. This method was adopted in this study, and 6110 groups of simulation data were generated in this step.
    2. In the second method, the kinetic parameters of the reaction are calculated by DFT from the reaction mechanism and reaction path to give theoretical values, which are finally corrected against the experimental data.

  6. Finally, the database was generated. In this work, the 6110 groups of simulation data, together with the 610 groups of experimental data, gave a total of 6720 groups of data.
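The merge in the final step can be sketched in a few lines. This is a minimal illustration only: the record fields (`T_K`, `conversion`) are hypothetical placeholders, not the actual database schema, and the placeholder groups simply stand in for the real 610 experimental and 6110 simulated records.

```python
# Sketch of the database-assembly step, assuming a list-of-dicts
# representation for each data group. Field names are illustrative only.

def build_database(experimental, simulated):
    """Merge experimental and simulated groups, tagging each record's origin."""
    database = []
    for record in experimental:
        database.append({**record, "source": "experiment"})
    for record in simulated:
        database.append({**record, "source": "simulation"})
    return database

# Placeholder groups standing in for the real 610 + 6110 records
experimental = [{"T_K": 333.0, "conversion": 0.82} for _ in range(610)]
simulated = [{"T_K": 333.0, "conversion": 0.80} for _ in range(6110)]

db = build_database(experimental, simulated)
print(len(db))  # 6720 groups in total
```

Tagging each record's origin preserves the experiment/simulation distinction for later weighting or validation splits.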

The following further elaborates on the reaction kinetics regression steps used in this chapter.


4.C.1 Elimination of the Diffusion Limitations


In this part, we use the PtMn/MCM-41 catalyst (Pt loading 1.4 wt%; Mn loading 0.9 wt%; termed In-70) as the example; more detailed information is given in our previous work (Applied Catalysis B: Environmental 2021, 284, 119803). This catalyst was selected to study the internal and external mass transfer, ensuring that all catalysts were evaluated with negligible mass-transfer limitations.
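The external-diffusion test reported in this section (oxidation rates at three stirring speeds) can be checked numerically: if the observed rate stays constant within experimental scatter as stirring increases, external diffusion is not rate-limiting. The 5% tolerance below is an assumed acceptance threshold, not taken from the source.

```python
# Stirring-speed test: a rate plateau indicates negligible external diffusion.
rates = {700: 0.47, 1000: 0.51, 1200: 0.49}  # rpm -> R_oxidation, kmol m^-3 h^-1

mean_rate = sum(rates.values()) / len(rates)
max_deviation = max(abs(r - mean_rate) / mean_rate for r in rates.values())

# Assumed threshold: deviations under 5% are treated as experimental scatter.
external_diffusion_negligible = max_deviation < 0.05
print(f"mean rate = {mean_rate:.2f} kmol m^-3 h^-1, "
      f"max relative deviation = {max_deviation:.1%}")
print("external diffusion negligible:", external_diffusion_negligible)
```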



  1. Effect of external diffusion on EG oxidation

    (Reaction conditions: 60 °C, 1 MPa O2, 0.15 M EG, 0.1 g Cat.)



    1. At 700 rpm, (20 minutes) Roxidation = 0.47 kmol m−3 h−1;
    2. At 1000 rpm, (20 minutes) Roxidation = 0.51 kmol m−3 h−1;
    3. At 1200 rpm, (20 minutes) Roxidation = 0.49 kmol m−3 h−1;

  2. Interphase mass transfer limitation for oxygen

    1. Gas–liquid mass transfer limitation (J. Catal. 2016, 337, 272–283; Chem. Eng. Process, 2004, 43, 823–830; J. Catal. 2008, 257, 1–4; Three phase catalytic reactors, Ramachandran & Chaudhari, 1983; J. Chem. Eng. Data 1984, 29, 286–287):
      $$\frac{R_{\mathrm{oxidation}}\, d_{\mathrm{bubble}}}{6\, \epsilon\, k_{\mathrm{g\text{-}l}}\, C_{\mathrm{O_2,b}}} = \frac{0.51\ (\mathrm{kmol\ m^{-3}\ h^{-1}}) \times 0.000002\ (\mathrm{m})}{6 \times 0.09 \times 1.44\ (\mathrm{m\ h^{-1}}) \times 0.0078\ (\mathrm{kmol\ m^{-3}})} = 1.7 \times 10^{-4} < 0.1$$

    2. Liquid–solid mass transfer limitation (J. Catal. 2016, 337, 272–283; Three phase catalytic reactors, Ramachandran and Chaudhari, 1983; MCM-41, assume 1 μm, see Figure 4.C.3):
      $$\frac{R_{\mathrm{oxidation}}\, \rho_{\mathrm{p}}\, d_{\mathrm{p}}}{6\, \omega_{\mathrm{cat}}\, k_{\mathrm{l\text{-}s}}\, C^{*}_{\mathrm{O_2,b}}} = \frac{0.51\ (\mathrm{kmol\ m^{-3}\ h^{-1}}) \times 143\ (\mathrm{kg\ m^{-3}}) \times 10^{-6}\ (\mathrm{m})}{6 \times 8\ (\mathrm{kg\ m^{-3}}) \times 72.1\ (\mathrm{m\ h^{-1}}) \times 0.0078\ (\mathrm{kmol\ m^{-3}})} = 2.7 \times 10^{-6} < 0.1$$

    3. Internal diffusion (J. Catal. 2016, 337, 272–283; Three phase catalytic reactors, Ramachandran and Chaudhari, 1983; Perry’s Handbook: see tables 5–16):
      $$\frac{R_{\mathrm{oxidation}}\, \rho_{\mathrm{p}}\, d_{\mathrm{p}}^{2}}{4\, \omega_{\mathrm{cat}}\, D_{\mathrm{e}}\, C^{*}_{\mathrm{O_2}}} = \frac{0.51\ (\mathrm{kmol\ m^{-3}\ h^{-1}}) \times 143\ (\mathrm{kg\ m^{-3}}) \times (10^{-6}\ \mathrm{m})^{2}}{4 \times 8\ (\mathrm{kg\ m^{-3}}) \times 0.094\ (\mathrm{m^{2}\ h^{-1}}) \times 0.0078\ (\mathrm{kmol\ m^{-3}})} = 3.1 \times 10^{-9} < 1$$

  3. Interphase mass transfer limitation for ethylene glycol

    1. Liquid–solid transfer limitation (J. Catal. 2016, 337, 272–283; AIChE J. 1980, 26, 177–201; Three phase catalytic reactors, Ramachandran and Chaudhari, 1983):
      $$\frac{R_{\mathrm{oxidation}}\, \rho_{\mathrm{p}}\, d_{\mathrm{p}}}{6\, \omega_{\mathrm{cat}}\, k_{\mathrm{l\text{-}s}}\, C_{\mathrm{EG}}} = \frac{0.51\ (\mathrm{kmol\ m^{-3}\ h^{-1}}) \times 143\ (\mathrm{kg\ m^{-3}}) \times 10^{-6}\ (\mathrm{m})}{6 \times 8\ (\mathrm{kg\ m^{-3}}) \times 0.6\ (\mathrm{m\ h^{-1}}) \times 0.15\ (\mathrm{kmol\ m^{-3}})} = 1.7 \times 10^{-5} < 0.1$$

    2. Intraparticle transfer limitation (J. Catal. 2016, 337, 272–283; AIChE J. 1980, 26, 177–201; Three phase catalytic reactors, Ramachandran & Chaudhari, 1983):
      $$\frac{d_{\mathrm{p}}}{6}\left[\frac{(m+1)\, R_{\mathrm{oxidation}}\, \rho_{\mathrm{p}}}{2\, \omega_{\mathrm{cat}}\, D_{\mathrm{e}}\, C_{\mathrm{EG}}}\right]^{0.5} = \frac{10^{-6}\ (\mathrm{m})}{6}\left[\frac{0.51\ (\mathrm{kmol\ m^{-3}\ h^{-1}}) \times 143\ (\mathrm{kg\ m^{-3}})}{8\ (\mathrm{kg\ m^{-3}}) \times 0.036\ (\mathrm{m^{2}\ h^{-1}}) \times 0.15\ (\mathrm{kmol\ m^{-3}})}\right]^{0.5} = 6.9 \times 10^{-6} < 0.2$$
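The five mass-transfer criteria above can be reproduced numerically. All parameter values are taken directly from the equations in this section; each ratio compares the observed rate with the corresponding transport capacity, and values well below the stated thresholds confirm kinetic control.

```python
# Mass-transfer criteria for EG oxidation over PtMn/MCM-41 (In-70).
R_ox    = 0.51     # observed oxidation rate, kmol m^-3 h^-1
d_b     = 2e-6     # bubble diameter, m
eps     = 0.09     # gas holdup
k_gl    = 1.44     # gas-liquid mass-transfer coefficient, m h^-1
C_O2    = 0.0078   # dissolved O2 concentration, kmol m^-3
rho_p   = 143.0    # particle density, kg m^-3
d_p     = 1e-6     # particle diameter, m
w_cat   = 8.0      # catalyst loading, kg m^-3
k_ls    = 72.1     # liquid-solid coefficient for O2, m h^-1
D_e     = 0.094    # effective diffusivity for O2, m^2 h^-1
k_ls_EG = 0.6      # liquid-solid coefficient for EG, m h^-1
D_e_EG  = 0.036    # effective diffusivity for EG, m^2 h^-1
C_EG    = 0.15     # EG concentration, kmol m^-3

gas_liquid       = R_ox * d_b / (6 * eps * k_gl * C_O2)
liquid_solid     = R_ox * rho_p * d_p / (6 * w_cat * k_ls * C_O2)
internal_O2      = R_ox * rho_p * d_p**2 / (4 * w_cat * D_e * C_O2)
liquid_solid_EG  = R_ox * rho_p * d_p / (6 * w_cat * k_ls_EG * C_EG)
intraparticle_EG = d_p / 6 * (R_ox * rho_p / (w_cat * D_e_EG * C_EG)) ** 0.5

for name, value, limit in [
    ("gas-liquid (O2)",     gas_liquid,       0.1),
    ("liquid-solid (O2)",   liquid_solid,     0.1),
    ("internal (O2)",       internal_O2,      1.0),
    ("liquid-solid (EG)",   liquid_solid_EG,  0.1),
    ("intraparticle (EG)",  intraparticle_EG, 0.2),
]:
    print(f"{name}: {value:.1e} < {limit} -> {value < limit}")
```

Each criterion evaluates several orders of magnitude below its threshold, matching the values reported above.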

Figure 4.C.3 SEM images of (a) MCM-41 and (b) PtMn/MCM-41 (In-70) [33], with permission of Elsevier.

The reaction orders for the EG concentration (CEG) (Figure 4.C.4a) over the Pt/MCM-41, PtMn/MCM-41 (IM), and PtMn/MCM-41 (In-70) catalysts are 0.50, 0.44, and 0.38, respectively. This suggests that the PtMn catalysts adsorb EG more strongly than the monometallic Pt/MCM-41 catalyst, and that the Pt–Mn2O3 interfacial active site in the PtMn/MCM-41 (In-70) catalyst further promotes the adsorption of EG, in good agreement with the DFT calculation results. Meanwhile, the PtMn/MCM-41 (In-70) catalyst displays the smallest reaction order for O2 pressure (PO) (Figure 4.C.4b, 0.26) among the three catalysts, meaning that it adsorbs oxygen strongly, probably due to the relatively faster replenishment of the consumed surface molecular oxygen [33].


4.C.2 Reaction Kinetics


As shown in Figure 4.C.4, a power-function reaction kinetic equation was employed to investigate the reaction orders for EG and O2 over the PtMn/MCM-41 (Pt loading 1.4 wt%; Mn loading 0.9 wt%; termed In-70) and Pt/MCM-41 (Pt loading 1.3 wt%) catalysts. The equation can be expressed as:


Figure 4.C.4 The effects of (a) EG concentration and (b) oxygen pressure on the oxidation of EG. Reaction conditions: 25 ml of EG aqueous solution (0.15 M), 0.1 g Cat., T = 60 °C, stirring speed = 1000 rpm. (c) Apparent activation energy for EG conversion.


$$r = -\frac{\mathrm{d}C_{\mathrm{EG}}}{\mathrm{d}t} = A \exp\left(\frac{-E_{\mathrm{a}}}{RT}\right) C_{\mathrm{EG}}^{a}\, P_{\mathrm{O}}^{b}$$

Here, CEG and PO are the initial ethylene glycol concentration (mol l−1) and the O2 pressure, respectively; a and b are the corresponding reaction orders. r, T, A, R, and Ea are the initial rate of EG oxidation (mol l−1 h−1), the reaction temperature (K), the pre-exponential factor, the ideal gas constant (8.314 × 10−3 kJ mol−1 K−1), and the activation energy (kJ mol−1), respectively. It should be noted that the oxygen concentration at the catalyst surface equals its bulk concentration (calculated by Henry's law) under the reaction conditions (J. Phys. Chem. C 2010, 114, 1164–1172; J. Chem. Thermodyn. 2000, 32, 1145; J. Phys. Chem. 1996, 100, 5597).
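Taking logarithms of the power-law rate equation turns the parameter regression into ordinary linear least squares: ln r = ln A − Ea/(RT) + a ln CEG + b ln PO. The sketch below illustrates this, assuming synthetic rate data; only the reaction orders 0.38 and 0.26 come from the text, while the A and Ea values used to generate the data are arbitrary placeholders.

```python
import numpy as np

R = 8.314e-3  # ideal gas constant, kJ mol^-1 K^-1, as in the text

# Synthetic "true" parameters used only to generate illustrative data.
# a_true and b_true match the reported orders for PtMn/MCM-41 (In-70);
# A_true and Ea_true are arbitrary assumptions for this sketch.
A_true, Ea_true, a_true, b_true = 2.0e8, 55.0, 0.38, 0.26

rng = np.random.default_rng(0)
T = rng.uniform(320.0, 360.0, 40)   # K
C = rng.uniform(0.05, 0.30, 40)     # mol L^-1
P = rng.uniform(0.2, 1.0, 40)       # O2 pressure
r = A_true * np.exp(-Ea_true / (R * T)) * C**a_true * P**b_true

# Linearized model: ln r = ln A - Ea/(R T) + a ln C + b ln P
X = np.column_stack([np.ones_like(T), -1.0 / (R * T), np.log(C), np.log(P)])
lnA, Ea, a, b = np.linalg.lstsq(X, np.log(r), rcond=None)[0]

print(f"Ea = {Ea:.1f} kJ/mol, a = {a:.2f}, b = {b:.2f}")
```

With noiseless data the fit recovers the generating parameters; with real initial-rate data the same design matrix is used and the residuals quantify the fit quality.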


References



  1. Birner, B., Severinghaus, J., Paplawsky, B., and Keeling, R. (2022). Increasing atmospheric helium due to fossil fuel exploitation. Nature Geoscience 15: 346–348.
  2. Lee, K., Jing, Y., Wang, Y., and Yan, N. (2022). A unified view on catalytic conversion of biomass and waste plastics. Nature Reviews Chemistry 6: 635–652.
  3. Gao, Z., Ma, B., Chen, S. et al. (2022). Converting waste PET plastics into automobile fuels and antifreeze components. Nature Communications 13: 3343–3351.
  4. Cabernard, L., Pfister, S., Oberschelp, C., and Hellweg, S. (2022). Growing environmental footprint of plastics driven by coal combustion. Nature Sustainability 5: 139–148.
  5. Tatrari, G., Karakoti, M., Tewari, C. et al. (2021). Solid waste-derived carbon nanomaterials for supercapacitor applications: a recent overview. Materials Advances 2: 1454–1484.
  6. Wen, Z., Xie, Y., Chen, M., and Dinga, C. (2021). China’s plastic import ban increases prospects of environmental impact mitigation of plastic waste trade flow worldwide. Nature Communications 12: 425.
  7. Zhou, H., Ren, Y., Li, Z. et al. (2021). Electrocatalytic upcycling of polyethylene terephthalate to commodity chemicals and H2 fuel. Nature Communications 12: 4679.
  8. Maity, A., Chaudhari, S., Titman, J., and Polshettiwar, V. (2020). Catalytic nanosponges of acidic aluminosilicates for plastic degradation and CO2 to fuel conversion. Nature Communications 11: 3828.
  9. Kwon, D., Jung, S., Moon, D. et al. (2022). Strategic management of harmful chemicals produced from pyrolysis of plastic cup waste using CO2 as a reaction medium. Chemical Engineering Journal 437: 135524.
  10. Zhuang, Z., Mohamed, B., Li, L., and Swei, O. (2022). An economic and global warming impact assessment of common sewage sludge treatment processes in North America. Journal of Cleaner Production 370: 133539.
  11. Ribeiro, F., Nascimento, F., and Silva, M. (2022). Environmental performance analysis of railway infrastructure using life cycle assessment: selecting pavement projects based on global warming potential impacts. Journal of Cleaner Production 365: 132558.
  12. Lee, J., Bhagwat, S., Kuanyshev, N. et al. (2023). Rewiring yeast metabolism for producing 2,3-butanediol and two downstream applications: techno-economic analysis and life cycle assessment of methyl ethyl ketone (MEK) and agricultural biostimulant production. Chemical Engineering Journal 451: 138886.
  13. Feist, J., Lee, D., and Xia, Y. (2022). A versatile approach for the synthesis of degradable polymers via controlled ring-opening metathesis copolymerization. Nature Chemistry 14: 53–58.
  14. Yang, L., Wang, Y., Yuan, J. et al. (2022). Construction of covalent-integrated MOFs@COFs composite material for efficient synergistic adsorption and degradation of pollutants. Chemical Engineering Journal 446: 137095.
  15. Tang, X., Ma, S., Xu, S. et al. (2023). Effects of different pretreatment strategies during porous carbonaceous materials fabrication on their peroxydisulfate activation for organic pollutant degradation: focus on mechanism. Chemical Engineering Journal 451: 138576.
  16. Zhou, N., Dai, L., Lyu, Y. et al. (2022). A structured catalyst of ZSM-5/SiC foam for chemical recycling of waste plastics via catalytic pyrolysis. Chemical Engineering Journal 440: 135836.
  17. Yue, X., Zhang, F., Wu, L. et al. (2022). Upcycling of blending waste plastics as flexible growing substrate with superabsorbing property. Chemical Engineering Journal 435: 134622.
  18. Li, T., Tan, Z., Tang, Z. et al. (2022). One-pot chemoenzymatic synthesis of glycolic acid from formaldehyde. Green Chemistry 24: 5064–5069.
  19. Samantaray, P., Little, A., Haddleton, D. et al. (2020). Poly(glycolic acid) (PGA): a versatile building block expanding high performance and sustainable bioplastic applications. Green Chemistry 22: 4055–4081.
  20. Jem, K. and Tan, B. (2020). The development and challenges of poly(lactic acid) and poly(glycolic acid). Advanced Industrial and Engineering Polymer Research 3: 60–70.
  21. Niu, D., Xu, P., Sun, Z. et al. (2021). Superior toughened bio-compostable poly(glycolic acid)-based blends with enhanced melt strength via selective interfacial localization of in-situ grafted copolymers. Polymer 235: 124269.
  22. Yeo, T., Ko, Y., Kim, E. et al. (2021). Promoting bone regeneration by 3D-printed poly(glycolic acid)/hydroxyapatite composite scaffolds. Journal of Industrial and Engineering Chemistry 94: 343–351.
  23. Yan, H., Wang, Z., Li, L. et al. (2021). DOPA-derived electroactive copolymer and IGF-1 immobilized poly(lactic-co-glycolic acid)/hydroxyapatite biodegradable microspheres for synergistic bone repair. Chemical Engineering Journal 416: 129129.
  24. Kim, B., Ko, Y., Yeo, T. et al. (2019). Guided regeneration of rabbit calvarial defects using silk fibroin nanofiber-poly(glycolic acid) hybrid scaffolds. ACS Biomaterials Science & Engineering 5: 5266–5272.
  25. Zhu, T., Yao, D., Li, D. et al. (2021). Multiple strategies for metabolic engineering of Escherichia coli for efficient production of glycolate. Biotechnology and Bioengineering 118: 4699–4707.
  26. Bioresorbable Polymers Market by Type (Polylactic acid (PLA), Polyglycolic acid (PGA), Polylactic-co-glycolic acid (PLGA), Polycaprolactone (PCL)), application (orthopedic devices, drug delivery), and Region – Global Forecast to 2027. https://www.marketsandmarkets.com/Market-Reports/bioresorbable-polymer-market-235258717.html (accessed Jan 2023).
  27. Global Polyglycolic Acid (PGA) Market Report By Form (Fibers, Films, Others (Plate, Rod, and Composites)), By End-use Industry (Medical, Oil & Gas, Packaging, Others (Civil Engineering, Agriculture, and Filter)) And By Regions – Industry Trends, Size, Share, Growth, Estimation and Forecast, 2023–2032. https://www.valuemarketresearch.com/report/polyglycolic-acid-pga-market (accessed July 2024).
  28. Yu, Q., Hua, X., Zhou, X. et al. (2022). A roundabout strategy for high-purity glycolic acid biopreparation via a resting cell bio-oxidation catalysis of ethylene glycol. Green Chemistry 24: 5142–5150.
  29. Huang, Q., Xie, C., Li, Y. et al. (2017). Thermodynamic equilibrium of hydroxyacetic acid in pure and binary solvent systems. The Journal of Chemical Thermodynamics 108: 76–83.
  30. Shi, Y., Sun, H., Cao, H. et al. (2011). Synergistic extraction of glycolic acid from glycolonitrile hydrolysate. Industrial and Engineering Chemistry Research 50: 8216–8224.
  31. Zhao, D., Zhu, T., Li, J. et al. (2021). Poly(lactic-co-glycolic acid)-based composite bone-substitute materials. Bioactive Materials 6: 346–360.
  32. Lee, S., Kim, J., Lee, J., and Kim, Y. (1993). Carbonylation of formaldehyde over ion exchange resin catalysts. 1. Batch reactor studies. Industrial and Engineering Chemistry Research 32: 253–259.
  33. Yan, H., Yao, S., Wang, J. et al. (2021). Engineering Pt–Mn2O3 interface to boost selective oxidation of ethylene glycol to glycolic acid. Applied Catalysis B: Environmental 284: 119803.
  34. Yan, H., Zhao, M., Feng, X. et al. (2022). PO43− coordinated robust single-atom platinum catalyst for selective polyol oxidation. Angewandte Chemie, International Edition 61: e202116059.
  35. Zhou, X., Zha, M., Cao, J. et al. (2021). Glycolic acid production from ethylene glycol via sustainable biomass energy: integrated conceptual process design and comparative techno-economic–society–environment analysis. ACS Sustainable Chemistry & Engineering 9: 10948–10962.
  36. Xu, S., Xiao, Y., Zhang, W. et al. (2022). Relay catalysis of copper-magnesium catalyst on efficient valorization of glycerol to glycolic acid. Chemical Engineering Journal 428: 132555.
  37. Kang, N., Kim, M., Baek, K. et al. (2022). Photoautotrophic organic acid production: glycolic acid production by microalgal cultivation. Chemical Engineering Journal 433: 133636.
  38. Kim, D., Oh, L., Tan, Y. et al. (2021). Enhancing glycerol conversion and selectivity toward glycolic acid via precise nanostructuring of electrocatalysts. ACS Catalysis 11: 14926–14931.
  39. Kim, H., Kim, Y., Lee, D. et al. (2017). Coproducing value-added chemicals and hydrogen with electrocatalytic glycerol oxidation technology: experimental and techno-economic investigations. ACS Sustainable Chemistry & Engineering 5: 6626–6634.
  40. Silver, D., Huang, A., Maddison, C. et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature 529: 484–489.
  41. Yang, H., Li, J., Lim, K. et al. (2022). Automatic strain sensor design via active learning and data augmentation for soft machines. Nature Machine Intelligence 4: 84–94.
  42. Li, J., Zhang, L., Li, C. et al. (2022). Data-driven based in-depth interpretation and inverse design of anaerobic digestion for CH4-rich biogas production. ACS ES&T Engineering 2: 642–652.
  43. Leonard, K., Hasan, F., Sneddon, H., and You, F. (2021). Can artificial intelligence and machine learning be used to accelerate sustainable chemistry and engineering? ACS Sustainable Chemistry & Engineering 9: 6126–6129.
  44. Li, J., Zhang, W., Liu, T. et al. (2021). Machine learning aided bio-oil production with high energy recovery and low nitrogen content from hydrothermal liquefaction of biomass with experiment verification. Chemical Engineering Journal 425: 130649.
  45. Chakkingal, A., Janssens, P., Poissonnier, J. et al. (2022). Multi-output machine learning models for kinetic data evaluation: a Fischer–Tropsch synthesis case study. Chemical Engineering Journal 446: 137186.
  46. Shi, S. and Xu, G. (2018). Novel performance prediction model of a biofilm system treating domestic wastewater based on stacked denoising auto-encoders deep learning network. Chemical Engineering Journal 347: 280–290.
  47. Wu, Z., Rincon, D., Luo, J., and Christofides, P. (2021). Machine learning modeling and predictive control of nonlinear processes using noisy data. AIChE Journal 67: e17164.
  48. Zhan, N. and Kitchin, J. (2022). Uncertainty quantification in machine learning and nonlinear least squares regression models. AIChE Journal 68: e17516.
  49. Wang, Z., Su, Y., Shen, W. et al. (2019). Predictive deep learning models for environmental properties: the direct calculation of octanol–water partition coefficients from molecular graphs. Green Chemistry 21: 4555–4565.
  50. Yang, A., Su, Y., Wang, Z. et al. (2021). A multi-task deep learning neural network for predicting flammability-related properties from molecular structures. Green Chemistry 23: 4451–4465.
  51. Ouyang, Y., Vandewalle, L., Chen, L. et al. (2022). Speeding up turbulent reactive flow simulation via a deep artificial neural network: a methodology study. Chemical Engineering Journal 429: 132442.
  52. Hinton, G. and Osindero, S. (2006). A fast learning algorithm for deep belief nets. Neural Computation 18: 1527–1554.
  53. He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA. IEEE. pp. 770–778.
  54. Cheng, F., Belden, E., Li, W. et al. (2022). Accuracy of predictions made by machine learned models for biocrude yields obtained from hydrothermal liquefaction of organic wastes. Chemical Engineering Journal 442: 136013.
  55. Lira, J., Riella, H., Padoin, N., and Soares, C. (2022). Computational fluid dynamics (CFD), artificial neural network (ANN) and genetic algorithm (GA) as a hybrid method for the analysis and optimization of micro-photocatalytic reactors: NOx abatement as a case study. Chemical Engineering Journal 431: 133771.
  56. Kabak, E., Cagcag, Y., Aydın, T., and Turan, N. (2022). Prediction and optimization of nitrogen losses in co-composting process by using a hybrid cascaded prediction model and genetic algorithm. Chemical Engineering Journal 437: 135499.
  57. Tao, Y., Rahn, C.D., Archer, L.A., and You, F. (2021). Second life and recycling: energy and environmental sustainability perspectives for high-performance lithium-ion batteries. Science Advances 7 (45): eabi7633.
  58. Tian, X., Stranks, S., and You, F. (2021). Life cycle assessment of recycling strategies for perovskite photovoltaic modules. Nature Sustainability 4: 821–829.
  59. Tao, Y., Steckel, D., Klemeš, J., and You, F. (2021). Trend towards virtual and hybrid conferences may be an effective climate change mitigation strategy. Nature Communications 12: 7324.
  60. Niaz, H., Shams, M., Liu, J., and You, F. (2022). Mining bitcoins with carbon capture and renewable energy for carbon neutrality across states in the USA. Energy & Environmental Science 15: 3551–3570.
  61. Zhou, X., Yan, H., Feng, X. et al. (2021). Producing glyceric acid from glycerol via integrating vacuum dividing wall columns: conceptual process design and techno-economic-environmental analysis. Green Chemistry 23: 3664–3676.
  62. Zhou, X., Yang, Q., Yang, S. et al. (2022). One-step leap in achieving oil-to-chemicals by using a two-stage riser reactor: molecular-level process model and multi-objective optimization strategy. Chemical Engineering Journal 444: 136684.
  63. Zhou, X., Li, Z., Feng, X. et al. (2023). A hybrid deep learning framework driven by data and reaction mechanism for predicting sustainable glycolic acid production performance. AIChE Journal 69 (7): e18083.
