Commit 05180ced475c1b3be66c3681ceefc537ba7f5514

Authored by dsotofor
1 parent fc9e19ce1d
Exists in main

version all ok...

Showing 6 changed files with 5 additions and 5 deletions

\begin{thebibliography}{}

\bibitem[Dat, 2023]{Data}
 (2023).
\newblock Jeux de données.
\newblock
  \url{https://disc.univ-fcomte.fr/gitlab/daniel.soto_forero/ai-vt-recommender-system}.
\newblock Accessed: 2023-11-20.

\bibitem[UCI, 2024]{UCI}
 (2024).
\newblock Markelle Kelly, Rachel Longjohn, Kolby Nottingham, The UCI Machine
  Learning Repository.
\newblock \url{https://archive.ics.uci.edu}.
\newblock Accessed: 2024-09-30.

\bibitem[Aamodt and Plaza, 1994]{doi:10.3233/AIC-1994-7104}
Aamodt, A. and Plaza, E. (1994).
\newblock Case-based reasoning: Foundational issues, methodological variations,
  and system approaches.
\newblock {\em AI Communications}, 7(1):39--59.

\bibitem[Abel et~al., 2023]{NEURIPS2023_9d8cf124}
Abel, D., Barreto, A., Van~Roy, B., Precup, D., van Hasselt, H.~P., and Singh,
  S. (2023).
\newblock A definition of continual reinforcement learning.
\newblock In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and
  Levine, S., editors, {\em Advances in Neural Information Processing Systems},
  volume~36, pages 50377--50407. Curran Associates, Inc.

\bibitem[Alabdulrahman and Viktor, 2021]{ALABDULRAHMAN2021114061}
Alabdulrahman, R. and Viktor, H. (2021).
\newblock Catering for unique tastes: Targeting grey-sheep users recommender
  systems through one-class machine learning.
\newblock {\em Expert Systems with Applications}, 166:114061.

\bibitem[Arthurs et~al., 2019]{Arthurs}
Arthurs, N., Stenhaug, B., Karayev, S., and Piech, C. (2019).
\newblock Grades are not normal: Improving exam score models using the
  logit-normal distribution.
\newblock In {\em International Conference on Educational Data Mining (EDM)},
  page~6.

\bibitem[Auer et~al., 2021]{Auer}
Auer, F., Lenarduzzi, V., Felderer, M., and Taibi, D. (2021).
\newblock From monolithic systems to microservices: An assessment framework.
\newblock {\em Information and Software Technology}, 137:106600.

\bibitem[Badier et~al., 2023]{badier:hal-04092828}
Badier, A., Lefort, M., and Lefevre, M. (2023).
\newblock {Comprendre les usages et effets d'un syst{\`e}me de recommandations
  p{\'e}dagogiques en contexte d'apprentissage non-formel}.
\newblock In {\em {EIAH'23}}, Brest, France.

\bibitem[Bakurov et~al., 2021]{BAKUROV2021100913}
Bakurov, I., Castelli, M., Gau, O., Fontanella, F., and Vanneschi, L. (2021).
\newblock Genetic programming for stacked generalization.
\newblock {\em Swarm and Evolutionary Computation}, 65:100913.

\bibitem[Busch et~al., 2023]{busch2023teaching}
Busch, B., Watson, E., and Bogatchek, L. (2023).
\newblock {\em Teaching and Learning Illuminated: The Big Ideas, Illustrated}.
\newblock Taylor and Francis.

\bibitem[Butdee and Tichkiewitch, 2011]{10.1007/978-3-642-15973-2_50}
Butdee, S. and Tichkiewitch, S. (2011).
\newblock Case-based reasoning for adaptive aluminum extrusion die design
  together with parameters by neural networks.
\newblock In Bernard, A., editor, {\em Global Product Development}, pages
  491--496, Berlin, Heidelberg. Springer Berlin Heidelberg.

\bibitem[Chen et~al., 2025]{CHEN2025104070}
Chen, H., Feng, Z., Chen, S., Wu, H., Sun, Y., Li, J., Gao, Q., Zhang, L., and
  Xue, X. (2025).
\newblock Incorporating forgetting curve and memory replay for evolving
  socially-aware recommendation.
\newblock {\em Information Processing and Management}, 62(3):104070.

\bibitem[Chiu et~al., 2023]{CHIU2023100118}
Chiu, T.~K., Xia, Q., Zhou, X., Chai, C.~S., and Cheng, M. (2023).
\newblock Systematic literature review on opportunities, challenges, and future
  research recommendations of artificial intelligence in education.
\newblock {\em Computers and Education: Artificial Intelligence}, 4:100118.

\bibitem[Choi et~al., 2023]{cmc.2023.033417}
Choi, J., Suh, D., and Otto, M.-O. (2023).
\newblock Boosted stacking ensemble machine learning method for wafer map
  pattern classification.
\newblock {\em Computers, Materials \& Continua}, 74(2):2945--2966.

\bibitem[Riesbeck and Schank, 1989]{Riesbeck1989}
Riesbeck, C.~K. and Schank, R.~C. (1989).
\newblock {\em Inside Case-Based Reasoning}.
\newblock Psychology Press.

\bibitem[Cunningham and Delany, 2021]{10.1145/3459665} 96 96 \bibitem[Cunningham and Delany, 2021]{10.1145/3459665}
Cunningham, P. and Delany, S.~J. (2021). 97 97 Cunningham, P. and Delany, S.~J. (2021).
\newblock K-nearest neighbour classifiers - a tutorial. 98 98 \newblock K-nearest neighbour classifiers - a tutorial.
\newblock {\em ACM Comput. Surv.}, 54(6). 99 99 \newblock {\em ACM Comput. Surv.}, 54(6).
100 100
\bibitem[Didden et~al., 2023]{DIDDEN2023338} 101 101 \bibitem[Didden et~al., 2023]{DIDDEN2023338}
Didden, J.~B., Dang, Q.-V., and Adan, I.~J. (2023). 102 102 Didden, J.~B., Dang, Q.-V., and Adan, I.~J. (2023).
\newblock Decentralized learning multi-agent system for online machine shop 103 103 \newblock Decentralized learning multi-agent system for online machine shop
scheduling problem. 104 104 scheduling problem.
\newblock {\em Journal of Manufacturing Systems}, 67:338--360. 105 105 \newblock {\em Journal of Manufacturing Systems}, 67:338--360.
106 106
\bibitem[Ezaldeen et~al., 2022]{EZALDEEN2022100700} 107 107 \bibitem[Ezaldeen et~al., 2022]{EZALDEEN2022100700}
Ezaldeen, H., Misra, R., Bisoy, S.~K., Alatrash, R., and Priyadarshini, R. 108 108 Ezaldeen, H., Misra, R., Bisoy, S.~K., Alatrash, R., and Priyadarshini, R.
(2022). 109 109 (2022).
\newblock A hybrid e-learning recommendation integrating adaptive profiling and 110 110 \newblock A hybrid e-learning recommendation integrating adaptive profiling and
sentiment analysis. 111 111 sentiment analysis.
\newblock {\em Journal of Web Semantics}, 72:100700. 112 112 \newblock {\em Journal of Web Semantics}, 72:100700.
113 113
\bibitem[Feely et~al., 2020]{10.1007/978-3-030-58342-2_5} 114 114 \bibitem[Feely et~al., 2020]{10.1007/978-3-030-58342-2_5}
Feely, C., Caulfield, B., Lawlor, A., and Smyth, B. (2020). 115 115 Feely, C., Caulfield, B., Lawlor, A., and Smyth, B. (2020).
\newblock Using case-based reasoning to predict marathon performance and 116 116 \newblock Using case-based reasoning to predict marathon performance and
recommend tailored training plans. 117 117 recommend tailored training plans.
\newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning 118 118 \newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning
Research and Development}, pages 67--81, Cham. Springer International 119 119 Research and Development}, pages 67--81, Cham. Springer International
Publishing. 120 120 Publishing.
121 121
\bibitem[Grace et~al., 2016]{10.1007/978-3-319-47096-2_11} 122 122 \bibitem[Grace et~al., 2016]{10.1007/978-3-319-47096-2_11}
Grace, K., Maher, M.~L., Wilson, D.~C., and Najjar, N.~A. (2016). 123 123 Grace, K., Maher, M.~L., Wilson, D.~C., and Najjar, N.~A. (2016).
\newblock Combining cbr and deep learning to generate surprising recipe 124 124 \newblock Combining cbr and deep learning to generate surprising recipe
designs. 125 125 designs.
\newblock In Goel, A., D{\'i}az-Agudo, M.~B., and Roth-Berghofer, T., editors, 126 126 \newblock In Goel, A., D{\'i}az-Agudo, M.~B., and Roth-Berghofer, T., editors,
{\em Case-Based Reasoning Research and Development}, pages 154--169, Cham. 127 127 {\em Case-Based Reasoning Research and Development}, pages 154--169, Cham.
Springer International Publishing. 128 128 Springer International Publishing.
129 129
\bibitem[Gupta et~al., 2021]{9434422} 130 130 \bibitem[Gupta et~al., 2021]{9434422}
Gupta, S., Chaudhari, S., Joshi, G., and Yağan, O. (2021). 131 131 Gupta, S., Chaudhari, S., Joshi, G., and Yağan, O. (2021).
\newblock Multi-armed bandits with correlated arms. 132 132 \newblock Multi-armed bandits with correlated arms.
\newblock {\em IEEE Transactions on Information Theory}, 67(10):6711--6732. 133 133 \newblock {\em IEEE Transactions on Information Theory}, 67(10):6711--6732.
134 134
\bibitem[Hajduk et~al., 2019]{hajduk2019cognitive} 135 135 \bibitem[Hajduk et~al., 2019]{hajduk2019cognitive}
Hajduk, M., Sukop, M., and Haun, M. (2019). 136 136 Hajduk, M., Sukop, M., and Haun, M. (2019).
\newblock {\em Cognitive Multi-agent Systems: Structures, Strategies and 137 137 \newblock {\em Cognitive Multi-agent Systems: Structures, Strategies and
Applications to Mobile Robotics and Robosoccer}. 138 138 Applications to Mobile Robotics and Robosoccer}.
\newblock Studies in Systems, Decision and Control. Springer International 139 139 \newblock Studies in Systems, Decision and Control. Springer International
Publishing. 140 140 Publishing.
141 141
\bibitem[Henriet et~al., 2017]{doi:10.1177/1754337116651013} 142 142 \bibitem[Henriet et~al., 2017]{doi:10.1177/1754337116651013}
Henriet, J., Christophe, L., and Laurent, P. (2017). 143 143 Henriet, J., Christophe, L., and Laurent, P. (2017).
\newblock Artificial intelligence-virtual trainer: An educative system based on 144 144 \newblock Artificial intelligence-virtual trainer: An educative system based on
artificial intelligence and designed to produce varied and consistent 145 145 artificial intelligence and designed to produce varied and consistent
training lessons. 146 146 training lessons.
\newblock {\em Proceedings of the Institution of Mechanical Engineers, Part P: 147 147 \newblock {\em Proceedings of the Institution of Mechanical Engineers, Part P:
Journal of Sports Engineering and Technology}, 231(2):110--124. 148 148 Journal of Sports Engineering and Technology}, 231(2):110--124.
149 149
\bibitem[Henriet and Greffier, 2018]{10.1007/978-3-030-01081-2_9} 150 150 \bibitem[Henriet and Greffier, 2018]{10.1007/978-3-030-01081-2_9}
Henriet, J. and Greffier, F. (2018). 151 151 Henriet, J. and Greffier, F. (2018).
\newblock Ai-vt: An example of cbr that generates a variety of solutions to the 152 152 \newblock Ai-vt: An example of cbr that generates a variety of solutions to the
same problem. 153 153 same problem.
\newblock In Cox, M.~T., Funk, P., and Begum, S., editors, {\em Case-Based 154 154 \newblock In Cox, M.~T., Funk, P., and Begum, S., editors, {\em Case-Based
Reasoning Research and Development}, pages 124--139, Cham. Springer 155 155 Reasoning Research and Development}, pages 124--139, Cham. Springer
International Publishing. 156 156 International Publishing.
157 157
\bibitem[Hipólito and Kirchhoff, 2023]{HIPOLITO2023103510} 158 158 \bibitem[Hipólito and Kirchhoff, 2023]{HIPOLITO2023103510}
Hipólito, I. and Kirchhoff, M. (2023). 159 159 Hipólito, I. and Kirchhoff, M. (2023).
\newblock Breaking boundaries: The bayesian brain hypothesis for perception and 160 160 \newblock Breaking boundaries: The bayesian brain hypothesis for perception and
prediction. 161 161 prediction.
\newblock {\em Consciousness and Cognition}, 111:103510. 162 162 \newblock {\em Consciousness and Cognition}, 111:103510.
163 163
\bibitem[Hoang, 2018]{Hoang} 164 164 \bibitem[Hoang, 2018]{Hoang}
Hoang, L. (2018). 165 165 Hoang, L. (2018).
\newblock {\em La formule du savoir. Une philosophie unifiée du savoir fondée 166 166 \newblock {\em La formule du savoir. Une philosophie unifiée du savoir fondée
sur le théorème de Bayes}. 167 167 sur le théorème de Bayes}.
\newblock EDP Sciences. 168 168 \newblock EDP Sciences.
169 169
\bibitem[Hu et~al., 2025]{HU2025127130} 170 170 \bibitem[Hu et~al., 2025]{HU2025127130}
Hu, B., Ma, Y., Liu, Z., and Wang, H. (2025). 171 171 Hu, B., Ma, Y., Liu, Z., and Wang, H. (2025).
\newblock A social importance and category enhanced cold-start user 172 172 \newblock A social importance and category enhanced cold-start user
recommendation system. 173 173 recommendation system.
\newblock {\em Expert Systems with Applications}, 277:127130. 174 174 \newblock {\em Expert Systems with Applications}, 277:127130.
175 175
\bibitem[Huang et~al., 2023]{HUANG2023104684} 176 176 \bibitem[Huang et~al., 2023]{HUANG2023104684}
Huang, A.~Y., Lu, O.~H., and Yang, S.~J. (2023). 177 177 Huang, A.~Y., Lu, O.~H., and Yang, S.~J. (2023).
\newblock Effects of artificial intelligence–enabled personalized 178 178 \newblock Effects of artificial intelligence–enabled personalized
recommendations on learners’ learning engagement, motivation, and outcomes 179 179 recommendations on learners’ learning engagement, motivation, and outcomes
in a flipped classroom. 180 180 in a flipped classroom.
\newblock {\em Computers and Education}, 194:104684. 181 181 \newblock {\em Computers and Education}, 194:104684.
182 182
\bibitem[Ingkavara et~al., 2022]{INGKAVARA2022100086} 183 183 \bibitem[Ingkavara et~al., 2022]{INGKAVARA2022100086}
Ingkavara, T., Panjaburee, P., Srisawasdi, N., and Sajjapanroj, S. (2022). 184 184 Ingkavara, T., Panjaburee, P., Srisawasdi, N., and Sajjapanroj, S. (2022).
\newblock The use of a personalized learning approach to implementing 185 185 \newblock The use of a personalized learning approach to implementing
self-regulated online learning. 186 186 self-regulated online learning.
\newblock {\em Computers and Education: Artificial Intelligence}, 3:100086. 187 187 \newblock {\em Computers and Education: Artificial Intelligence}, 3:100086.
188 188
\bibitem[Jean-Daubias, 2011]{Daubias2011} 189 189 \bibitem[Jean-Daubias, 2011]{Daubias2011}
Jean-Daubias, S. (2011). 190 190 Jean-Daubias, S. (2011).
\newblock Ingénierie des profils d'apprenants. 191 191 \newblock Ingénierie des profils d'apprenants.
192 192
\bibitem[Jung et~al., 2009]{JUNG20095695} 193 193 \bibitem[Jung et~al., 2009]{JUNG20095695}
Jung, S., Lim, T., and Kim, D. (2009). 194 194 Jung, S., Lim, T., and Kim, D. (2009).
\newblock Integrating radial basis function networks with case-based reasoning 195 195 \newblock Integrating radial basis function networks with case-based reasoning
for product design. 196 196 for product design.
\newblock {\em Expert Systems with Applications}, 36(3, Part 1):5695--5701. 197 197 \newblock {\em Expert Systems with Applications}, 36(3, Part 1):5695--5701.
198 198
\bibitem[Kamali et~al., 2023]{KAMALI2023110242} 199 199 \bibitem[Kamali et~al., 2023]{KAMALI2023110242}
Kamali, S.~R., Banirostam, T., Motameni, H., and Teshnehlab, M. (2023). 200 200 Kamali, S.~R., Banirostam, T., Motameni, H., and Teshnehlab, M. (2023).
\newblock An immune inspired multi-agent system for dynamic multi-objective 201 201 \newblock An immune inspired multi-agent system for dynamic multi-objective
optimization. 202 202 optimization.
\newblock {\em Knowledge-Based Systems}, 262:110242. 203 203 \newblock {\em Knowledge-Based Systems}, 262:110242.
204 204
\bibitem[Kim, 2024]{Kim2024} 205 205 \bibitem[Kim, 2024]{Kim2024}
Kim, W. (2024). 206 206 Kim, W. (2024).
\newblock A random focusing method with jensen--shannon divergence for 207 207 \newblock A random focusing method with jensen--shannon divergence for
improving deep neural network performance ensuring architecture consistency. 208 208 improving deep neural network performance ensuring architecture consistency.
\newblock {\em Neural Processing Letters}, 56(4):199. 209 209 \newblock {\em Neural Processing Letters}, 56(4):199.
210 210
\bibitem[Kolodner, 1983]{KOLODNER1983281} 211 211 \bibitem[Kolodner, 1983]{KOLODNER1983281}
Kolodner, J.~L. (1983). 212 212 Kolodner, J.~L. (1983).
\newblock Reconstructive memory: A computer model. 213 213 \newblock Reconstructive memory: A computer model.
\newblock {\em Cognitive Science}, 7(4):281--328. 214 214 \newblock {\em Cognitive Science}, 7(4):281--328.
215 215
\bibitem[Kuzilek et~al., 2017]{Kuzilek2017} 216 216 \bibitem[Kuzilek et~al., 2017]{Kuzilek2017}
Kuzilek, J., Hlosta, M., and Zdrahal, Z. (2017). 217 217 Kuzilek, J., Hlosta, M., and Zdrahal, Z. (2017).
\newblock Open university learning analytics dataset. 218 218 \newblock Open university learning analytics dataset.
\newblock {\em Scientific Data}, 4(1):170171. 219 219 \newblock {\em Scientific Data}, 4(1):170171.
220 220
\bibitem[Lalitha and Sreeja, 2020]{LALITHA2020583} 221 221 \bibitem[Lalitha and Sreeja, 2020]{LALITHA2020583}
Lalitha, T.~B. and Sreeja, P.~S. (2020). 222 222 Lalitha, T.~B. and Sreeja, P.~S. (2020).
\newblock Personalised self-directed learning recommendation system. 223 223 \newblock Personalised self-directed learning recommendation system.
\newblock {\em Procedia Computer Science}, 171:583--592. 224 224 \newblock {\em Procedia Computer Science}, 171:583--592.
\newblock Third International Conference on Computing and Network 225 225 \newblock Third International Conference on Computing and Network
Communications (CoCoNet'19). 226 226 Communications (CoCoNet'19).
227 227
\bibitem[Lei, 2024]{lei2024analysis} 228 228 \bibitem[Lei, 2024]{lei2024analysis}
Lei, Z. (2024). 229 229 Lei, Z. (2024).
\newblock Analysis of simpson’s paradox and its applications. 230 230 \newblock Analysis of simpson’s paradox and its applications.
\newblock {\em Highlights in Science, Engineering and Technology}, 88:357--362. 231 231 \newblock {\em Highlights in Science, Engineering and Technology}, 88:357--362.
232 232
\bibitem[Leikola et~al., 2018]{min8100434} 233 233 \bibitem[Leikola et~al., 2018]{min8100434}
Leikola, M., Sauer, C., Rintala, L., Aromaa, J., and Lundström, M. (2018). 234 234 Leikola, M., Sauer, C., Rintala, L., Aromaa, J., and Lundström, M. (2018).
\newblock Assessing the similarity of cyanide-free gold leaching processes: A 235 235 \newblock Assessing the similarity of cyanide-free gold leaching processes: A
case-based reasoning application. 236 236 case-based reasoning application.
\newblock {\em Minerals}, 8(10). 237 237 \newblock {\em Minerals}, 8(10).
238 238
\bibitem[Lepage et~al., 2020]{10.1007/978-3-030-58342-2_20} 239 239 \bibitem[Lepage et~al., 2020]{10.1007/978-3-030-58342-2_20}
Lepage, Y., Lieber, J., Mornard, I., Nauer, E., Romary, J., and Sies, R. 240 240 Lepage, Y., Lieber, J., Mornard, I., Nauer, E., Romary, J., and Sies, R.
(2020). 241 241 (2020).
\newblock The french correction: When retrieval is harder to specify than 242 242 \newblock The french correction: When retrieval is harder to specify than
adaptation. 243 243 adaptation.
\newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning 244 244 \newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning
Research and Development}, pages 309--324, Cham. Springer International 245 245 Research and Development}, pages 309--324, Cham. Springer International
Publishing. 246 246 Publishing.
247 247
\bibitem[Li et~al., 2024]{Li_2024} 248 248 \bibitem[Li et~al., 2024]{Li_2024}
Li, Z., Ding, Z., Yu, Y., and Zhang, P. (2024). 249 249 Li, Z., Ding, Z., Yu, Y., and Zhang, P. (2024).
\newblock The kullback–leibler divergence and the convergence rate of fast 250 250 \newblock The kullback–leibler divergence and the convergence rate of fast
covariance matrix estimators in galaxy clustering analysis. 251 251 covariance matrix estimators in galaxy clustering analysis.
\newblock {\em The Astrophysical Journal}, 965(2):125. 252 252 \newblock {\em The Astrophysical Journal}, 965(2):125.
253 253
\bibitem[Liang et~al., 2021]{10.3389/fgene.2021.600040} 254 254 \bibitem[Liang et~al., 2021]{10.3389/fgene.2021.600040}
Liang, M., Chang, T., An, B., Duan, X., Du, L., Wang, X., Miao, J., Xu, L., 255 255 Liang, M., Chang, T., An, B., Duan, X., Du, L., Wang, X., Miao, J., Xu, L.,
Gao, X., Zhang, L., Li, J., and Gao, H. (2021). 256 256 Gao, X., Zhang, L., Li, J., and Gao, H. (2021).
\newblock A stacking ensemble learning framework for genomic prediction. 257 257 \newblock A stacking ensemble learning framework for genomic prediction.
\newblock {\em Frontiers in Genetics}, 12. 258 258 \newblock {\em Frontiers in Genetics}, 12.
259 259
\bibitem[Lin, 2022]{9870279} 260 260 \bibitem[Lin, 2022]{9870279}
Lin, B. (2022). 261 261 Lin, B. (2022).
\newblock Evolutionary multi-armed bandits with genetic thompson sampling. 262 262 \newblock Evolutionary multi-armed bandits with genetic thompson sampling.
\newblock In {\em 2022 IEEE Congress on Evolutionary Computation (CEC)}, pages 263 263 \newblock In {\em 2022 IEEE Congress on Evolutionary Computation (CEC)}, pages
1--8. 264 264 1--8.
265 265
\bibitem[Liu and Yu, 2023]{Liu2023} 266 266 \bibitem[Liu and Yu, 2023]{Liu2023}
Liu, M. and Yu, D. (2023). 267 267 Liu, M. and Yu, D. (2023).
\newblock Towards intelligent e-learning systems. 268 268 \newblock Towards intelligent e-learning systems.
\newblock {\em Education and Information Technologies}, 28(7):7845--7876. 269 269 \newblock {\em Education and Information Technologies}, 28(7):7845--7876.
270 270
\bibitem[Louvros et~al., 2023]{jmse11050890} 271 271 \bibitem[Louvros et~al., 2023]{jmse11050890}
Louvros, P., Stefanidis, F., Boulougouris, E., Komianos, A., and Vassalos, D. 272 272 Louvros, P., Stefanidis, F., Boulougouris, E., Komianos, A., and Vassalos, D.
(2023). 273 273 (2023).
\newblock Machine learning and case-based reasoning for real-time onboard 274 274 \newblock Machine learning and case-based reasoning for real-time onboard
prediction of the survivability of ships. 275 275 prediction of the survivability of ships.
\newblock {\em Journal of Marine Science and Engineering}, 11(5). 276 276 \newblock {\em Journal of Marine Science and Engineering}, 11(5).
277 277
\bibitem[Maher and Grace, 2017]{10.1007/978-3-319-61030-6_1} 278 278 \bibitem[Maher and Grace, 2017]{10.1007/978-3-319-61030-6_1}
Maher, M.~L. and Grace, K. (2017). 279 279 Maher, M.~L. and Grace, K. (2017).
\newblock Encouraging curiosity in case-based reasoning and recommender 280 280 \newblock Encouraging curiosity in case-based reasoning and recommender
systems. 281 281 systems.
\newblock In Aha, D.~W. and Lieber, J., editors, {\em Case-Based Reasoning 282 282 \newblock In Aha, D.~W. and Lieber, J., editors, {\em Case-Based Reasoning
Research and Development}, pages 3--15, Cham. Springer International 283 283 Research and Development}, pages 3--15, Cham. Springer International
Publishing. 284 284 Publishing.
285 285
\bibitem[Malburg et~al., 2024]{10.1007/978-3-031-63646-2_4} 286 286 \bibitem[Malburg et~al., 2024]{10.1007/978-3-031-63646-2_4}
Malburg, L., Hotz, M., and Bergmann, R. (2024). 287 287 Malburg, L., Hotz, M., and Bergmann, R. (2024).
\newblock Improving complex adaptations in process-oriented case-based 288 288 \newblock Improving complex adaptations in process-oriented case-based
reasoning by applying rule-based adaptation. 289 289 reasoning by applying rule-based adaptation.
\newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D., 290 290 \newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D.,
editors, {\em Case-Based Reasoning Research and Development}, pages 50--66, 291 291 editors, {\em Case-Based Reasoning Research and Development}, pages 50--66,
Cham. Springer Nature Switzerland. 292 292 Cham. Springer Nature Switzerland.
293 293
\bibitem[Mang et~al., 2021]{Liang} 294 294 \bibitem[Mang et~al., 2021]{Liang}
Mang, L., Tianpeng, C., Bingxing, A., Xinghai, D., Lili, D., Xiaoqiao, W., 295 295 Mang, L., Tianpeng, C., Bingxing, A., Xinghai, D., Lili, D., Xiaoqiao, W.,
Jian, M., Lingyang, X., Xue, G., Lupei, Z., Junya, L., and Huijiang, G. 296 296 Jian, M., Lingyang, X., Xue, G., Lupei, Z., Junya, L., and Huijiang, G.
(2021). 297 297 (2021).
\newblock A stacking ensemble learning framework for genomic prediction. 298 298 \newblock A stacking ensemble learning framework for genomic prediction.
\newblock {\em Frontiers in Genetics}. 299 299 \newblock {\em Frontiers in Genetics}.
300 300
\bibitem[Minsker and Strawn, 2024]{doi:10.1137/23M1592420} 301 301 \bibitem[Minsker and Strawn, 2024]{doi:10.1137/23M1592420}
Minsker, S. and Strawn, N. (2024). 302 302 Minsker, S. and Strawn, N. (2024).
\newblock The geometric median and applications to robust mean estimation. 303 303 \newblock The geometric median and applications to robust mean estimation.
\newblock {\em SIAM Journal on Mathematics of Data Science}, 6(2):504--533. 304 304 \newblock {\em SIAM Journal on Mathematics of Data Science}, 6(2):504--533.
305 305
\bibitem[Muangprathub et~al., 2020]{MUANGPRATHUB2020e05227} 306 306 \bibitem[Muangprathub et~al., 2020]{MUANGPRATHUB2020e05227}
Muangprathub, J., Boonjing, V., and Chamnongthai, K. (2020). 307 307 Muangprathub, J., Boonjing, V., and Chamnongthai, K. (2020).
\newblock Learning recommendation with formal concept analysis for intelligent 308 308 \newblock Learning recommendation with formal concept analysis for intelligent
tutoring system. 309 309 tutoring system.
\newblock {\em Heliyon}, 6(10):e05227. 310 310 \newblock {\em Heliyon}, 6(10):e05227.
311 311
\bibitem[Müller and Bergmann, 2015]{Muller} 312 312 \bibitem[Müller and Bergmann, 2015]{Muller}
Müller, G. and Bergmann, R. (2015). 313 313 Müller, G. and Bergmann, R. (2015).
\newblock Cookingcake: A framework for the adaptation of cooking recipes 314 314 \newblock Cookingcake: A framework for the adaptation of cooking recipes
represented as workflows. 315 315 represented as workflows.
\newblock In {\em International Conference on Case-Based Reasoning}. 316 316 \newblock In {\em International Conference on Case-Based Reasoning}.
317 317
\bibitem[Nguyen, 2024]{NGUYEN2024111566} 318 318 \bibitem[Nguyen, 2024]{NGUYEN2024111566}
Nguyen, A. (2024). 319 319 Nguyen, A. (2024).
\newblock Dynamic metaheuristic selection via thompson sampling for online 320 320 \newblock Dynamic metaheuristic selection via thompson sampling for online
optimization. 321 321 optimization.
\newblock {\em Applied Soft Computing}, 158:111566. 322 322 \newblock {\em Applied Soft Computing}, 158:111566.
323 323
\bibitem[Nkambou et~al., 2010]{Nkambou} 324 324 \bibitem[Nkambou et~al., 2010]{Nkambou}
Nkambou, R., Bourdeau, J., and Mizoguchi, R. (2010). 325 325 Nkambou, R., Bourdeau, J., and Mizoguchi, R. (2010).
\newblock {\em Advances in Intelligent Tutoring Systems}. 326 326 \newblock {\em Advances in Intelligent Tutoring Systems}.
\newblock Springer Berlin, Heidelberg, 1 edition. 327 327 \newblock Springer Berlin, Heidelberg, 1 edition.
328 328
\bibitem[Obeid et~al., 2022]{Obeid} 329 329 \bibitem[Obeid et~al., 2022]{Obeid}
Obeid, C., Lahoud, C., Khoury, H.~E., and Champin, P. (2022). 330 330 Obeid, C., Lahoud, C., Khoury, H.~E., and Champin, P. (2022).
\newblock A novel hybrid recommender system approach for student academic 331 331 \newblock A novel hybrid recommender system approach for student academic
advising named cohrs, supported by case-based reasoning and ontology. 332 332 advising named cohrs, supported by case-based reasoning and ontology.
\newblock {\em Computer Science and Information Systems}, 19(2):979–1005. 333 333 \newblock {\em Computer Science and Information Systems}, 19(2):979–1005.
334 334
\bibitem[Onta{\~{n}}{\'o}n et~al., 2015]{10.1007/978-3-319-24586-7_20}
Onta{\~{n}}{\'o}n, S., Plaza, E., and Zhu, J. (2015).
\newblock Argument-based case revision in CBR for story generation.
\newblock In H{\"u}llermeier, E. and Minor, M., editors, {\em Case-Based
Reasoning Research and Development}, pages 290--305, Cham. Springer
International Publishing.

\bibitem[Ou et~al., 2024]{pmlr-v238-ou24a}
Ou, T., Cummings, R., and Avella~Medina, M. (2024).
\newblock Thompson sampling itself is differentially private.
\newblock In Dasgupta, S., Mandt, S., and Li, Y., editors, {\em Proceedings of
The 27th International Conference on Artificial Intelligence and Statistics},
volume 238 of {\em Proceedings of Machine Learning Research}, pages
1576--1584. PMLR.

\bibitem[Parejas-Llanovarced et~al., 2024]{PAREJASLLANOVARCED2024111469}
Parejas-Llanovarced, H., Caro-Martínez, M., del Castillo, M. G.~O., and
Recio-García, J.~A. (2024).
\newblock Case-based selection of explanation methods for neural network image
classifiers.
\newblock {\em Knowledge-Based Systems}, 288:111469.

\bibitem[Petrovic et~al., 2016]{PETROVIC201617}
Petrovic, S., Khussainova, G., and Jagannathan, R. (2016).
\newblock Knowledge-light adaptation approaches in case-based reasoning for
radiotherapy treatment planning.
\newblock {\em Artificial Intelligence in Medicine}, 68:17--28.

\bibitem[Richter and Weber, 2013]{Richter2013}
Richter, M. and Weber, R. (2013).
\newblock {\em Case-Based Reasoning (A Textbook)}.
\newblock Springer-Verlag GmbH.

\bibitem[Richter, 2009]{RICHTER20093}
Richter, M.~M. (2009).
\newblock The search for knowledge, contexts, and case-based reasoning.
\newblock {\em Engineering Applications of Artificial Intelligence},
22(1):3--9.

\bibitem[Robertson and Watson, 2014]{Robertson2014ARO}
Robertson, G. and Watson, I.~D. (2014).
\newblock A review of real-time strategy game AI.
\newblock {\em AI Magazine}, 35:75--104.

\bibitem[{Roldan Reyes} et~al., 2015]{ROLDANREYES20151}
{Roldan Reyes}, E., Negny, S., {Cortes Robles}, G., and {Le Lann}, J. (2015).
\newblock Improvement of online adaptation knowledge acquisition and reuse in
case-based reasoning: Application to process engineering design.
\newblock {\em Engineering Applications of Artificial Intelligence}, 41:1--16.

\bibitem[Sadeghi~Moghadam et~al., 2024]{Sadeghi}
Sadeghi~Moghadam, M.~R., Jafarnejad, A., Heidary~Dahooie, J., and
Ghasemian~Sahebi, I. (2024).
\newblock A hidden Markov model based extended case-based reasoning algorithm
for relief materials demand forecasting.
\newblock {\em Mathematics Interdisciplinary Research}, 9(1):89--109.

\bibitem[Schank and Abelson, 1977]{schank+abelson77}
Schank, R.~C. and Abelson, R.~P. (1977).
\newblock {\em Scripts, Plans, Goals and Understanding: An Inquiry into Human
Knowledge Structures}.
\newblock L. Erlbaum, Hillsdale, NJ.

\bibitem[Seznec et~al., 2020]{pmlr-v108-seznec20a}
Seznec, J., Menard, P., Lazaric, A., and Valko, M. (2020).
\newblock A single algorithm for both restless and rested rotting bandits.
\newblock In Chiappa, S. and Calandra, R., editors, {\em Proceedings of the
Twenty Third International Conference on Artificial Intelligence and
Statistics}, volume 108 of {\em Proceedings of Machine Learning Research},
pages 3784--3794. PMLR.

\bibitem[Sinaga and Yang, 2020]{9072123}
Sinaga, K.~P. and Yang, M.-S. (2020).
\newblock Unsupervised k-means clustering algorithm.
\newblock {\em IEEE Access}, 8:80716--80727.

\bibitem[Skittou et~al., 2024]{skittou2024recommender}
Skittou, M., Merrouchi, M., and Gadi, T. (2024).
\newblock A recommender system for educational planning.
\newblock {\em Cybernetics and Information Technologies}, 24(2):67--85.

\bibitem[Smyth and Cunningham, 2018]{10.1007/978-3-030-01081-2_25}
Smyth, B. and Cunningham, P. (2018).
\newblock An analysis of case representations for marathon race prediction and
planning.
\newblock In Cox, M.~T., Funk, P., and Begum, S., editors, {\em Case-Based
Reasoning Research and Development}, pages 369--384, Cham. Springer
International Publishing.

\bibitem[Smyth and Willemsen, 2020]{10.1007/978-3-030-58342-2_8}
Smyth, B. and Willemsen, M.~C. (2020).
\newblock Predicting the personal-best times of speed skaters using case-based
reasoning.
\newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning
Research and Development}, pages 112--126, Cham. Springer International
Publishing.

\bibitem[Soto-Forero et~al., 2024a]{Soto2}
Soto-Forero, D., Ackermann, S., Betbeder, M.-L., and Henriet, J. (2024a).
\newblock Automatic real-time adaptation of training session difficulty using
rules and reinforcement learning in the AI-VT ITS.
\newblock {\em International Journal of Modern Education and Computer
Science (IJMECS)}, 16:56--71.

\bibitem[Soto-Forero et~al., 2024b]{10.1007/978-3-031-63646-2_13}
Soto-Forero, D., Ackermann, S., Betbeder, M.-L., and Henriet, J. (2024b).
\newblock The intelligent tutoring system AI-VT with case-based reasoning and
real time recommender models.
\newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D.,
editors, {\em Case-Based Reasoning Research and Development}, pages 191--205,
Cham. Springer Nature Switzerland.

\bibitem[Soto-Forero et~al., 2024c]{10.1007/978-3-031-63646-2_11}
Soto-Forero, D., Betbeder, M.-L., and Henriet, J. (2024c).
\newblock Ensemble stacking case-based reasoning for regression.
\newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D.,
editors, {\em Case-Based Reasoning Research and Development}, pages 159--174,
Cham. Springer Nature Switzerland.

\bibitem[Su et~al., 2022]{SU2022109547}
Su, Y., Cheng, Z., Wu, J., Dong, Y., Huang, Z., Wu, L., Chen, E., Wang, S., and
Xie, F. (2022).
\newblock Graph-based cognitive diagnosis for intelligent tutoring systems.
\newblock {\em Knowledge-Based Systems}, 253:109547.

\bibitem[Supic, 2018]{8495930}
Supic, H. (2018).
\newblock Case-based reasoning model for personalized learning path
recommendation in example-based learning activities.
\newblock In {\em 2018 IEEE 27th International Conference on Enabling
Technologies: Infrastructure for Collaborative Enterprises (WETICE)}, pages
175--178.

@article{ZHANG2021100025,
title = {AI technologies for education: Recent research and future directions},
journal = {Computers and Education: Artificial Intelligence},
volume = {2},
pages = {100025},
language = {English},
year = {2021},
issn = {2666-920X},
type = {article},
doi = {10.1016/j.caeai.2021.100025},
url = {https://www.sciencedirect.com/science/article/pii/S2666920X21000199},
author = {Ke Zhang and Ayse Begum Aslan},
address={USA},
affiliation={Wayne State University; Eastern Michigan University},
keywords = {Artificial intelligence, AI, AI in Education},
abstract = {From unique educational perspectives, this article reports a comprehensive review of selected empirical studies on artificial intelligence in education (AIEd) published in 1993–2020, as collected in the Web of Science database and selected AIEd-specialized journals. A total of 40 empirical studies met all selection criteria, and were fully reviewed using multiple methods, including selected bibliometrics, content analysis and categorical meta-trends analysis.
This article reports the current state of AIEd research, highlights selected AIEd technologies and applications, reviews their proven and potential benefits for education, bridges the gaps between AI technological innovations and their educational applications, and generates practical examples and inspirations for both technological experts that create AIEd technologies and educators who spearhead AI innovations in education. It also provides rich discussions on practical implications and future research directions from multiple perspectives. The advancement of AIEd calls for critical initiatives to address AI ethics and privacy concerns, and requires interdisciplinary and transdisciplinary collaborations in large-scaled, longitudinal research and development efforts.}
}

@article{PETROVIC201617,
title = {Knowledge-light adaptation approaches in case-based reasoning for radiotherapy treatment planning},
journal = {Artificial Intelligence in Medicine},
volume = {68},
pages = {17-28},
year = {2016},
language = {English},
issn = {0933-3657},
type = {article},
doi = {10.1016/j.artmed.2016.01.006},
url = {https://www.sciencedirect.com/science/article/pii/S093336571630015X},
author = {Sanja Petrovic and Gulmira Khussainova and Rupa Jagannathan},
affiliation={Nottingham University},
address={UK},
keywords = {Case-based reasoning, Adaptation-guided retrieval, Machine-learning tools, Radiotherapy treatment planning},
abstract = {Objective
Radiotherapy treatment planning aims at delivering a sufficient radiation dose to cancerous tumour cells while sparing healthy organs in the tumour-surrounding area. It is a time-consuming trial-and-error process that requires the expertise of a group of medical experts including oncologists and medical physicists and can take from 2 to 3h to a few days. Our objective is to improve the performance of our previously built case-based reasoning (CBR) system for brain tumour radiotherapy treatment planning. In this system, a treatment plan for a new patient is retrieved from a case base containing patient cases treated in the past and their treatment plans.
However, this system does not perform any adaptation, which is needed to account for any difference between the new and retrieved cases. Generally, the adaptation phase is considered to be intrinsically knowledge-intensive and domain-dependent. Therefore, an adaptation often requires a large amount of domain-specific knowledge, which can be difficult to acquire and often is not readily available. In this study, we investigate approaches to adaptation that do not require much domain knowledge, referred to as knowledge-light adaptation.
Methodology
We developed two adaptation approaches: adaptation based on machine-learning tools and adaptation-guided retrieval. They were used to adapt the beam number and beam angles suggested in the retrieved case. Two machine-learning tools, neural networks and naive Bayes classifier, were used in the adaptation to learn how the difference in attribute values between the retrieved and new cases affects the output of these two cases. The adaptation-guided retrieval takes into consideration not only the similarity between the new and retrieved cases, but also how to adapt the retrieved case.
Results
The research was carried out in collaboration with medical physicists at the Nottingham University Hospitals NHS Trust, City Hospital Campus, UK. All experiments were performed using real-world brain cancer patient cases treated with three-dimensional (3D)-conformal radiotherapy. Neural networks-based adaptation improved the success rate of the CBR system with no adaptation by 12%. However, naive Bayes classifier did not improve the current retrieval results as it did not consider the interplay among attributes. The adaptation-guided retrieval of the case for beam number improved the success rate of the CBR system by 29%. However, it did not demonstrate good performance for the beam angle adaptation. Its success rate was 29% versus 39% when no adaptation was performed.
Conclusions
The obtained empirical results demonstrate that the proposed adaptation methods improve the performance of the existing CBR system in recommending the number of beams to use. However, we also conclude that to be effective, the proposed adaptation of beam angles requires a large number of relevant cases in the case base.}
}

@article{ROLDANREYES20151,
title = {Improvement of online adaptation knowledge acquisition and reuse in case-based reasoning: Application to process engineering design},
journal = {Engineering Applications of Artificial Intelligence},
volume = {41},
pages = {1-16},
affiliation={Université de Toulouse; Instituto Tecnologico de Orizaba},
country={France},
language = {English},
year = {2015},
type = {article},
issn = {0952-1976},
doi = {10.1016/j.engappai.2015.01.015},
url = {https://www.sciencedirect.com/science/article/pii/S0952197615000263},
author = {E. {Roldan Reyes} and S. Negny and G. {Cortes Robles} and J.M. {Le Lann}},
keywords = {Case based reasoning, Constraint satisfaction problems, Interactive adaptation method, Online knowledge acquisition, Failure diagnosis and repair},
abstract = {Despite various publications in the area during the last few years, the adaptation step is still a crucial phase for a relevant and reasonable Case Based Reasoning system. Furthermore, the online acquisition of the new adaptation knowledge is of particular interest as it enables the progressive improvement of the system while reducing the knowledge engineering effort without constraints for the expert.
Therefore this paper presents a new interactive method for adaptation knowledge elicitation, acquisition and reuse, thanks to a modification of the traditional CBR cycle. Moreover to improve adaptation knowledge reuse, a test procedure is also implemented to help the user in the adaptation step and its diagnosis during adaptation failure. A study on the quality and usefulness of the new knowledge acquired is also driven. As our Knowledge Based Systems (KBS) is more focused on preliminary design, and more particularly in the field of process engineering, we need to unify in the same method two types of knowledge: contextual and general. To realize this, this article proposes the integration of the Constraint Satisfaction Problem (based on general knowledge) approach into the Case Based Reasoning (based on contextual knowledge) process to improve the case representation and the adaptation of past experiences. To highlight its capability, the proposed approach is illustrated through a case study dedicated to the design of an industrial mixing device.}
}

@article{JUNG20095695,
title = {Integrating radial basis function networks with case-based reasoning for product design},
journal = {Expert Systems with Applications},
volume = {36},
number = {3, Part 1},
language = {English},
pages = {5695-5701},
year = {2009},
type = {article},
issn = {0957-4174},
doi = {10.1016/j.eswa.2008.06.099},
url = {https://www.sciencedirect.com/science/article/pii/S0957417408003667},
author = {Sabum Jung and Taesoo Lim and Dongsoo Kim},
affiliation={LG Production Engineering Research Institute; Sungkyul University; Soongsil University},
keywords = {Case-based reasoning (CBR), Radial basis function network (RBFN), Design expert system, Product design},
abstract = {This paper presents a case-based design expert system that automatically determines the design values of a product. We focus on the design problem of a shadow mask which is a core component of monitors in the electronics industry. In case-based reasoning (CBR), it is important to retrieve similar cases and adapt them to meet design specifications exactly. Notably, difficulties in automating the adaptation process have prevented designers from being able to use design expert systems easily and efficiently. In this paper, we present a hybrid approach combining CBR and artificial neural networks in order to solve the problems occurring during the adaptation process.
We first constructed a radial basis function network (RBFN) composed of representative cases created by K-means clustering. Then, the representative case most similar to the current problem was adjusted using the network. The rationale behind the proposed approach is discussed, and experimental results acquired from real shadow mask design are presented. Using the design expert system, designers can reduce design time and errors and enhance the total quality of design. Furthermore, the expert system facilitates effective sharing of design knowledge among designers.}
}

@article{CHIU2023100118, 80 80 @article{CHIU2023100118,
title = {Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education}, 81 81 title = {Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education},
journal = {Computers and Education: Artificial Intelligence}, 82 82 journal = {Computers and Education: Artificial Intelligence},
volume = {4}, 83 83 volume = {4},
language = {English}, 84 84 language = {English},
type = {article}, 85 85 type = {article},
pages = {100118}, 86 86 pages = {100118},
year = {2023}, 87 87 year = {2023},
issn = {2666-920X}, 88 88 issn = {2666-920X},
doi = {https://doi.org/10.1016/j.caeai.2022.100118}, 89 89 doi = {https://doi.org/10.1016/j.caeai.2022.100118},
url = {https://www.sciencedirect.com/science/article/pii/S2666920X2200073X}, 90 90 url = {https://www.sciencedirect.com/science/article/pii/S2666920X2200073X},
author = {Thomas K.F. Chiu and Qi Xia and Xinyan Zhou and Ching Sing Chai and Miaoting Cheng}, 91 91 author = {Thomas K.F. Chiu and Qi Xia and Xinyan Zhou and Ching Sing Chai and Miaoting Cheng},
keywords = {Artificial intelligence, Artificial intelligence in education, Systematic review, Learning, Teaching, Assessment}, 92 92 keywords = {Artificial intelligence, Artificial intelligence in education, Systematic review, Learning, Teaching, Assessment},
abstract = {Applications of artificial intelligence in education (AIEd) are emerging and are new to researchers and practitioners alike. Reviews of the relevant literature have not examined how AI technologies have been integrated into each of the four key educational domains of learning, teaching, assessment, and administration. The relationships between the technologies and learning outcomes for students and teachers have also been neglected. This systematic review study aims to understand the opportunities and challenges of AIEd by examining the literature from the last 10 years (2012–2021) using matrix coding and content analysis approaches. The results present the current focus of AIEd research by identifying 13 roles of AI technologies in the key educational domains, 7 learning outcomes of AIEd, and 10 major challenges. The review also provides suggestions for future directions of AIEd research.} 93 93 abstract = {Applications of artificial intelligence in education (AIEd) are emerging and are new to researchers and practitioners alike. Reviews of the relevant literature have not examined how AI technologies have been integrated into each of the four key educational domains of learning, teaching, assessment, and administration. The relationships between the technologies and learning outcomes for students and teachers have also been neglected. This systematic review study aims to understand the opportunities and challenges of AIEd by examining the literature from the last 10 years (2012–2021) using matrix coding and content analysis approaches. The results present the current focus of AIEd research by identifying 13 roles of AI technologies in the key educational domains, 7 learning outcomes of AIEd, and 10 major challenges. The review also provides suggestions for future directions of AIEd research.}
} 94 94 }
95 95
@article{Robertson2014ARO, 96 96 @article{Robertson2014ARO,
title = {A Review of Real-Time Strategy Game AI}, 97 97 title = {A Review of Real-Time Strategy Game AI},
author = {Glen Robertson and Ian D. Watson}, 98 98 author = {Glen Robertson and Ian D. Watson},
affiliation = {University of Auckland},
keywords = {Game, AI, Real-time strategy},
type={article}, 101 101 type={article},
language={English}, 102 102 language={English},
abstract = {This literature review covers AI techniques used for real-time strategy video games, focusing specifically on StarCraft. It finds that the main areas of current academic research are in tactical and strategic decision making, plan recognition, and learning, and it outlines the research contributions in each of these areas. The paper then contrasts the use of game AI in academe and industry, finding the academic research heavily focused on creating game-winning agents, while the industry aims to maximize player enjoyment. It finds that industry adoption of academic research is low because it is either inapplicable or too time-consuming and risky to implement in a new game, which highlights an area for potential investigation: bridging the gap between academe and industry. Finally, the areas of spatial reasoning, multiscale AI, and cooperation are found to require future work, and standardized evaluation methods are proposed to produce comparable results between studies.},
journal = {AI Mag.}, 104 104 journal = {AI Mag.},
year = {2014}, 105 105 year = {2014},
volume = {35}, 106 106 volume = {35},
pages = {75-104} 107 107 pages = {75-104}
} 108 108 }
109 109
@Inproceedings{10.1007/978-3-642-15973-2_50, 110 110 @Inproceedings{10.1007/978-3-642-15973-2_50,
author={Butdee, S. 111 111 author={Butdee, S.
and Tichkiewitch, S.}, 112 112 and Tichkiewitch, S.},
affiliation={University of Technology North Bangkok; Grenoble Institute of Technology}, 113 113 affiliation={University of Technology North Bangkok; Grenoble Institute of Technology},
editor={Bernard, Alain}, 114 114 editor={Bernard, Alain},
title={Case-Based Reasoning for Adaptive Aluminum Extrusion Die Design Together with Parameters by Neural Networks}, 115 115 title={Case-Based Reasoning for Adaptive Aluminum Extrusion Die Design Together with Parameters by Neural Networks},
keywords={Adaptive die design and parameters, Optimal aluminum extrusion, Case-based reasoning, Neural networks}, 116 116 keywords={Adaptive die design and parameters, Optimal aluminum extrusion, Case-based reasoning, Neural networks},
booktitle={Global Product Development}, 117 117 booktitle={Global Product Development},
year={2011}, 118 118 year={2011},
type = {article; proceedings paper}, 119 119 type = {article; proceedings paper},
language = {English}, 120 120 language = {English},
publisher = {Springer Berlin Heidelberg}, 121 121 publisher = {Springer Berlin Heidelberg},
address = {Berlin, Heidelberg}, 122 122 address = {Berlin, Heidelberg},
pages = {491--496}, 123 123 pages = {491--496},
abstract = {Nowadays Aluminum extrusion die design is a critical task for improving productivity which involves with quality, time and cost. Case-Based Reasoning (CBR) method has been successfully applied to support the die design process in order to design a new die by tackling previous problems together with their solutions to match with a new similar problem. Such solutions are selected and modified to solve the present problem. However, the applications of the CBR are useful only retrieving previous features whereas the critical parameters are missing. In additions, the experience learning to such parameters are limited. This chapter proposes Artificial Neural Network (ANN) to associate the CBR in order to learning previous parameters and predict to the new die design according to the primitive die modification. The most satisfactory is to accommodate the optimal parameters of extrusion processes.}, 124 124 abstract = {Nowadays Aluminum extrusion die design is a critical task for improving productivity which involves with quality, time and cost. Case-Based Reasoning (CBR) method has been successfully applied to support the die design process in order to design a new die by tackling previous problems together with their solutions to match with a new similar problem. Such solutions are selected and modified to solve the present problem. However, the applications of the CBR are useful only retrieving previous features whereas the critical parameters are missing. In additions, the experience learning to such parameters are limited. This chapter proposes Artificial Neural Network (ANN) to associate the CBR in order to learning previous parameters and predict to the new die design according to the primitive die modification. The most satisfactory is to accommodate the optimal parameters of extrusion processes.},
isbn = {978-3-642-15973-2} 125 125 isbn = {978-3-642-15973-2}
} 126 126 }
127 127
@Inproceedings{10.1007/978-3-319-47096-2_11, 128 128 @Inproceedings{10.1007/978-3-319-47096-2_11,
author={Grace, Kazjon 129 129 author={Grace, Kazjon
and Maher, Mary Lou 130 130 and Maher, Mary Lou
and Wilson, David C. 131 131 and Wilson, David C.
and Najjar, Nadia A.}, 132 132 and Najjar, Nadia A.},
affiliation={University of North Carolina at Charlotte}, 133 133 affiliation={University of North Carolina at Charlotte},
editor={Goel, Ashok 134 134 editor={Goel, Ashok
and D{\'i}az-Agudo, M Bel{\'e}n 135 135 and D{\'i}az-Agudo, M Bel{\'e}n
and Roth-Berghofer, Thomas}, 136 136 and Roth-Berghofer, Thomas},
title={Combining CBR and Deep Learning to Generate Surprising Recipe Designs}, 137 137 title={Combining CBR and Deep Learning to Generate Surprising Recipe Designs},
keywords={Case-based reasoning, deep learning, recipe design}, 138 138 keywords={Case-based reasoning, deep learning, recipe design},
type = {article; proceedings paper}, 139 139 type = {article; proceedings paper},
booktitle={Case-Based Reasoning Research and Development}, 140 140 booktitle={Case-Based Reasoning Research and Development},
year={2016}, 141 141 year={2016},
publisher={Springer International Publishing}, 142 142 publisher={Springer International Publishing},
address={Cham}, 143 143 address={Cham},
language = {English}, 144 144 language = {English},
pages={154--169}, 145 145 pages={154--169},
abstract={This paper presents a dual-cycle CBR model in the domain of recipe generation. The model combines the strengths of deep learning and similarity-based retrieval to generate recipes that are novel and valuable (i.e. they are creative). The first cycle generates abstract descriptions which we call ``design concepts'' by synthesizing expectations from the entire case base, while the second cycle uses those concepts to retrieve and adapt objects. We define these conceptual object representations as an abstraction over complete cases on which expectations can be formed, allowing objects to be evaluated for surprisingness (the peak level of unexpectedness in the object, given the case base) and plausibility (the overall similarity of the object to those in the case base). The paper presents a prototype implementation of the model, and demonstrates its ability to generate objects that are simultaneously plausible and surprising, in addition to fitting a user query. This prototype is then compared to a traditional single-cycle CBR system.},
isbn={978-3-319-47096-2} 147 147 isbn={978-3-319-47096-2}
} 148 148 }
149 149
@Inproceedings{10.1007/978-3-319-61030-6_1, 150 150 @Inproceedings{10.1007/978-3-319-61030-6_1,
author={Maher, Mary Lou 151 151 author={Maher, Mary Lou
and Grace, Kazjon}, 152 152 and Grace, Kazjon},
editor={Aha, David W. 153 153 editor={Aha, David W.
and Lieber, Jean}, 154 154 and Lieber, Jean},
affiliation={University of North Carolina at Charlotte}, 155 155 affiliation={University of North Carolina at Charlotte},
title={Encouraging Curiosity in Case-Based Reasoning and Recommender Systems}, 156 156 title={Encouraging Curiosity in Case-Based Reasoning and Recommender Systems},
keywords={Curiosity, Case-based reasoning, Recommender systems}, 157 157 keywords={Curiosity, Case-based reasoning, Recommender systems},
booktitle={Case-Based Reasoning Research and Development}, 158 158 booktitle={Case-Based Reasoning Research and Development},
year={2017}, 159 159 year={2017},
publisher={Springer International Publishing}, 160 160 publisher={Springer International Publishing},
address={Cham}, 161 161 address={Cham},
pages={3--15}, 162 162 pages={3--15},
language = {English}, 163 163 language = {English},
type = {article; proceedings paper}, 164 164 type = {article; proceedings paper},
abstract={A key benefit of case-based reasoning (CBR) and recommender systems is the use of past experience to guide the synthesis or selection of the best solution for a specific context or user. Typically, the solution presented to the user is based on a value system that privileges the closest match in a query and the solution that performs best when evaluated according to predefined requirements. In domains in which creativity is desirable or the user is engaged in a learning activity, there is a benefit to moving beyond the expected or ``best match'' and include results based on computational models of novelty and surprise. In this paper, models of novelty and surprise are integrated with both CBR and Recommender Systems to encourage user curiosity.}, 165 165 abstract={A key benefit of case-based reasoning (CBR) and recommender systems is the use of past experience to guide the synthesis or selection of the best solution for a specific context or user. Typically, the solution presented to the user is based on a value system that privileges the closest match in a query and the solution that performs best when evaluated according to predefined requirements. In domains in which creativity is desirable or the user is engaged in a learning activity, there is a benefit to moving beyond the expected or ``best match'' and include results based on computational models of novelty and surprise. In this paper, models of novelty and surprise are integrated with both CBR and Recommender Systems to encourage user curiosity.},
isbn={978-3-319-61030-6} 166 166 isbn={978-3-319-61030-6}
} 167 167 }
168 168
@Inproceedings{Muller, 169 169 @Inproceedings{Muller,
author = {Müller, G. and Bergmann, R.}, 170 170 author = {Müller, G. and Bergmann, R.},
affiliation={University of Trier}, 171 171 affiliation={University of Trier},
year = {2015}, 172 172 year = {2015},
month = {01}, 173 173 month = {01},
language = {English}, 174 174 language = {English},
type = {article; proceedings paper}, 175 175 type = {article; proceedings paper},
abstract = {This paper presents CookingCAKE, a framework for the adaptation of cooking recipes represented as workflows. CookingCAKE integrates and combines several workflow adaptation approaches applied in process-oriented case-based reasoning (POCBR) in a single adaptation framework, thus providing a capable tool for the adaptation of cooking recipes. The available case base of cooking workflows is analyzed to generate adaptation knowledge which is used to adapt a recipe regarding restrictions and resources, which the user may define for the preparation of a dish.},
booktitle = {International Conference on Case-Based Reasoning}, 177 177 booktitle = {International Conference on Case-Based Reasoning},
title = {CookingCAKE: A Framework for the adaptation of cooking recipes represented as workflows}, 178 178 title = {CookingCAKE: A Framework for the adaptation of cooking recipes represented as workflows},
keywords={recipe adaptation, workflow adaptation, workflows, process-oriented, case based reasoning} 179 179 keywords={recipe adaptation, workflow adaptation, workflows, process-oriented, case based reasoning}
} 180 180 }
181 181
@Inproceedings{10.1007/978-3-319-24586-7_20, 182 182 @Inproceedings{10.1007/978-3-319-24586-7_20,
author={Onta{\~{n}}{\'o}n, S. 183 183 author={Onta{\~{n}}{\'o}n, S.
and Plaza, E. 184 184 and Plaza, E.
and Zhu, J.}, 185 185 and Zhu, J.},
editor={H{\"u}llermeier, Eyke 186 186 editor={H{\"u}llermeier, Eyke
and Minor, Mirjam}, 187 187 and Minor, Mirjam},
affiliation={Drexel University; Artificial Intelligence Research Institute CSIC}, 188 188 affiliation={Drexel University; Artificial Intelligence Research Institute CSIC},
title={Argument-Based Case Revision in CBR for Story Generation}, 189 189 title={Argument-Based Case Revision in CBR for Story Generation},
keywords={CBR, Case-based reasoning, Story generation}, 190 190 keywords={CBR, Case-based reasoning, Story generation},
booktitle={Case-Based Reasoning Research and Development}, 191 191 booktitle={Case-Based Reasoning Research and Development},
year={2015}, 192 192 year={2015},
publisher={Springer International Publishing}, 193 193 publisher={Springer International Publishing},
address={Cham}, 194 194 address={Cham},
language = {English}, 195 195 language = {English},
pages={290--305}, 196 196 pages={290--305},
type = {article; proceedings paper}, 197 197 type = {article; proceedings paper},
abstract={This paper presents a new approach to case revision in case-based reasoning based on the idea of argumentation. Previous work on case reuse has proposed the use of operations such as case amalgamation (or merging), which generate solutions by combining information coming from different cases. Such approaches are often based on exploring the search space of possible combinations looking for a solution that maximizes a certain criteria. We show how Revise can be performed by arguments attacking specific parts of a case produced by Reuse, and how they can guide and prevent repeating pitfalls in future cases. The proposed approach is evaluated in the task of automatic story generation.}, 198 198 abstract={This paper presents a new approach to case revision in case-based reasoning based on the idea of argumentation. Previous work on case reuse has proposed the use of operations such as case amalgamation (or merging), which generate solutions by combining information coming from different cases. Such approaches are often based on exploring the search space of possible combinations looking for a solution that maximizes a certain criteria. We show how Revise can be performed by arguments attacking specific parts of a case produced by Reuse, and how they can guide and prevent repeating pitfalls in future cases. The proposed approach is evaluated in the task of automatic story generation.},
isbn={978-3-319-24586-7} 199 199 isbn={978-3-319-24586-7}
} 200 200 }
201 201
@Inproceedings{10.1007/978-3-030-58342-2_20, 202 202 @Inproceedings{10.1007/978-3-030-58342-2_20,
author={Lepage, Yves 203 203 author={Lepage, Yves
and Lieber, Jean 204 204 and Lieber, Jean
and Mornard, Isabelle 205 205 and Mornard, Isabelle
and Nauer, Emmanuel 206 206 and Nauer, Emmanuel
and Romary, Julien 207 207 and Romary, Julien
and Sies, Reynault}, 208 208 and Sies, Reynault},
editor={Watson, Ian 209 209 editor={Watson, Ian
and Weber, Rosina}, 210 210 and Weber, Rosina},
title={The French Correction: When Retrieval Is Harder to Specify than Adaptation}, 211 211 title={The French Correction: When Retrieval Is Harder to Specify than Adaptation},
affiliation={Waseda University; Université de Lorraine}, 212 212 affiliation={Waseda University; Université de Lorraine},
keywords={case-based reasoning, retrieval, analogy, sentence correction}, 213 213 keywords={case-based reasoning, retrieval, analogy, sentence correction},
booktitle={Case-Based Reasoning Research and Development}, 214 214 booktitle={Case-Based Reasoning Research and Development},
year={2020}, 215 215 year={2020},
language = {English}, 216 216 language = {English},
type = {article; proceedings paper}, 217 217 type = {article; proceedings paper},
publisher={Springer International Publishing}, 218 218 publisher={Springer International Publishing},
address={Cham}, 219 219 address={Cham},
pages={309--324}, 220 220 pages={309--324},
abstract={A common idea in the field of case-based reasoning is that the retrieval step can be specified by the use of some similarity measure: the retrieved cases maximize the similarity to the target problem and, then, the adaptation step has to take into account the mismatches between the retrieved cases and the target problem in order to solve this latter. The use of this methodological schema for the application described in this paper has proven to be non efficient. Indeed, designing a retrieval procedure without the precise knowledge of the adaptation procedure has not been possible. The domain of this application is the correction of French sentences: a problem is an incorrect sentence and a valid solution is a correction of this problem. Adaptation consists in solving an analogical equation that enables to execute the correction of the retrieved case on the target problem. Thus, retrieval has to ensure that this application is feasible. The first version of such a retrieval procedure is described and evaluated: it is a knowledge-light procedure that does not use linguistic knowledge about French.},
isbn={978-3-030-58342-2} 222 222 isbn={978-3-030-58342-2}
} 223 223 }
224 224
@Inproceedings{10.1007/978-3-030-01081-2_25, 225 225 @Inproceedings{10.1007/978-3-030-01081-2_25,
author={Smyth, Barry 226 226 author={Smyth, Barry
and Cunningham, P{\'a}draig}, 227 227 and Cunningham, P{\'a}draig},
editor={Cox, Michael T. 228 228 editor={Cox, Michael T.
and Funk, Peter 229 229 and Funk, Peter
and Begum, Shahina}, 230 230 and Begum, Shahina},
affiliation={University College Dublin}, 231 231 affiliation={University College Dublin},
title={An Analysis of Case Representations for Marathon Race Prediction and Planning}, 232 232 title={An Analysis of Case Representations for Marathon Race Prediction and Planning},
keywords={Marathon planning, Case representation, Case-based reasoning}, 233 233 keywords={Marathon planning, Case representation, Case-based reasoning},
booktitle={Case-Based Reasoning Research and Development}, 234 234 booktitle={Case-Based Reasoning Research and Development},
year={2018}, 235 235 year={2018},
language = {English}, 236 236 language = {English},
publisher={Springer International Publishing}, 237 237 publisher={Springer International Publishing},
address={Cham}, 238 238 address={Cham},
pages={369--384}, 239 239 pages={369--384},
type = {article; proceedings paper}, 240 240 type = {article; proceedings paper},
abstract={We use case-based reasoning to help marathoners achieve a personal best for an upcoming race, by helping them to select an achievable goal-time and a suitable pacing plan. We evaluate several case representations and, using real-world race data, highlight their performance implications. Richer representations do not always deliver better prediction performance, but certain representational configurations do offer very significant practical benefits for runners, when it comes to predicting, and planning for, challenging goal-times during an upcoming race.}, 241 241 abstract={We use case-based reasoning to help marathoners achieve a personal best for an upcoming race, by helping them to select an achievable goal-time and a suitable pacing plan. We evaluate several case representations and, using real-world race data, highlight their performance implications. Richer representations do not always deliver better prediction performance, but certain representational configurations do offer very significant practical benefits for runners, when it comes to predicting, and planning for, challenging goal-times during an upcoming race.},
isbn={978-3-030-01081-2} 242 242 isbn={978-3-030-01081-2}
} 243 243 }
244 244
@Inproceedings{10.1007/978-3-030-58342-2_8, 245 245 @Inproceedings{10.1007/978-3-030-58342-2_8,
author={Smyth, Barry 246 246 author={Smyth, Barry
and Willemsen, Martijn C.}, 247 247 and Willemsen, Martijn C.},
editor={Watson, Ian 248 248 editor={Watson, Ian
and Weber, Rosina}, 249 249 and Weber, Rosina},
affiliation={University College Dublin; Eindhoven University of Technology}, 250 250 affiliation={University College Dublin; Eindhoven University of Technology},
title={Predicting the Personal-Best Times of Speed Skaters Using Case-Based Reasoning}, 251 251 title={Predicting the Personal-Best Times of Speed Skaters Using Case-Based Reasoning},
keywords={CBR for health and exercise, speed skating, race-time prediction, case representation}, 252 252 keywords={CBR for health and exercise, speed skating, race-time prediction, case representation},
booktitle={Case-Based Reasoning Research and Development}, 253 253 booktitle={Case-Based Reasoning Research and Development},
year={2020}, 254 254 year={2020},
type = {article; proceedings paper}, 255 255 type = {article; proceedings paper},
language = {English}, 256 256 language = {English},
publisher={Springer International Publishing}, 257 257 publisher={Springer International Publishing},
address={Cham}, 258 258 address={Cham},
pages={112--126}, 259 259 pages={112--126},
abstract={Speed skating is a form of ice skating in which the skaters race each other over a variety of standardised distances. Races take place on specialised ice-rinks and the type of track and ice conditions can have a significant impact on race-times. As race distances increase, pacing also plays an important role. In this paper we seek to extend recent work on the application of case-based reasoning to marathon-time prediction by predicting race-times for speed skaters. In particular, we propose and evaluate a number of case-based reasoning variants based on different case and feature representations to generate track-specific race predictions. We show it is possible to improve upon state-of-the-art prediction accuracy by harnessing richer case representations using shorter races and track-adjusted finish and lap-times.}, 260 260 abstract={Speed skating is a form of ice skating in which the skaters race each other over a variety of standardised distances. Races take place on specialised ice-rinks and the type of track and ice conditions can have a significant impact on race-times. As race distances increase, pacing also plays an important role. In this paper we seek to extend recent work on the application of case-based reasoning to marathon-time prediction by predicting race-times for speed skaters. In particular, we propose and evaluate a number of case-based reasoning variants based on different case and feature representations to generate track-specific race predictions. We show it is possible to improve upon state-of-the-art prediction accuracy by harnessing richer case representations using shorter races and track-adjusted finish and lap-times.},
isbn={978-3-030-58342-2} 261 261 isbn={978-3-030-58342-2}
} 262 262 }
263 263
@Inproceedings{10.1007/978-3-030-58342-2_5, 264 264 @Inproceedings{10.1007/978-3-030-58342-2_5,
author={Feely, Ciara 265 265 author={Feely, Ciara
and Caulfield, Brian 266 266 and Caulfield, Brian
and Lawlor, Aonghus 267 267 and Lawlor, Aonghus
and Smyth, Barry}, 268 268 and Smyth, Barry},
editor={Watson, Ian 269 269 editor={Watson, Ian
and Weber, Rosina}, 270 270 and Weber, Rosina},
affiliation={University College Dublin}, 271 271 affiliation={University College Dublin},
title={Using Case-Based Reasoning to Predict Marathon Performance and Recommend Tailored Training Plans}, 272 272 title={Using Case-Based Reasoning to Predict Marathon Performance and Recommend Tailored Training Plans},
keywords={CBR for health and exercise, marathon running, race-time prediction, plan recommendation}, 273 273 keywords={CBR for health and exercise, marathon running, race-time prediction, plan recommendation},
booktitle={Case-Based Reasoning Research and Development}, 274 274 booktitle={Case-Based Reasoning Research and Development},
year={2020}, 275 275 year={2020},
language = {English}, 276 276 language = {English},
publisher={Springer International Publishing}, 277 277 publisher={Springer International Publishing},
address={Cham},
pages={67--81},
type = {article; proceedings paper},
abstract={Training for the marathon, especially a first marathon, is always a challenge. Many runners struggle to find the right balance between their workouts and their recovery, often leading to sub-optimal performance on race-day or even injury during training. We describe and evaluate a novel case-based reasoning system to help marathon runners as they train in two ways. First, it uses a case-base of training/workouts and race histories to predict future marathon times for a target runner, throughout their training program, helping runners to calibrate their progress and, ultimately, plan their race-day pacing. Second, the system recommends tailored training plans to runners, adapted for their current goal-time target, and based on the training plans of similar runners who have achieved this time. We evaluate the system using a dataset of more than 21,000 unique runners and 1.5 million training/workout sessions.},
isbn={978-3-030-58342-2}
}

@article{LALITHA2020583,
title = {Personalised Self-Directed Learning Recommendation System},
journal = {Procedia Computer Science},
volume = {171},
pages = {583-592},
year = {2020},
type = {article},
language = {English},
note = {Third International Conference on Computing and Network Communications (CoCoNet'19)},
issn = {1877-0509},
doi = {10.1016/j.procs.2020.04.063},
url = {https://www.sciencedirect.com/science/article/pii/S1877050920310309},
author = {Lalitha, T. B. and Sreeja, P. S.},
affiliation={Hindustan Institute of Technology and Science},
keywords = {e-Learning, PSDLR, Recommendation System, SDL, Self-Directed Learning},
abstract = {Modern educational systems have changed drastically bringing in knowledge anywhere as needed by the learner with the evolution of Internet. Availability of knowledge in public domain, capability of exchanging large amount of information and filtering relevant information quickly has enabled disruption to conventional educational system. Thus, future trends are looking towards E-Learning (Electronic Learning) and M-Learning (Mobile Learning) technologies over the Internet for their vast knowledge acquisition. In this paper, the work gives an elaborate context of learning strategies prevailing and emerging with the classification of e-learning Techniques. It majorly focuses on the features and variety of aspects with the e-learning and the choice of learning method involved and facilitate the adoption of new ways for personalized selection on learning resources for SDL (Self-Directed Learning) from the unstructured, large web-based environment. Thereby, proposes a Personalised Self-Directed Learning Recommendation System (PSDLR) based on the personal specifications of the SDL learner. The result offers insight into the perspectives and challenges of Self-Directed Learning based on cognitive and constructive characteristics which majorly incorporates web-based learning and gives path in finding appropriate solutions using machine learning techniques and ontology for the open problems in the respective fields with personalised recommendations and guidelines for future research.}
}

@article{Zhou2021,
author={Zhou, Lina and Wang, Chunxia},
affiliation={Baotou Medical College},
title={Research on Recommendation of Personalized Exercises in English Learning Based on Data Mining},
journal={Scientific Programming},
year={2021},
month={Dec},
type = {article},
language = {English},
day={21},
publisher={Hindawi},
keywords={Recommender systems, Learning},
volume={2021},
pages={5042286},
abstract={Aiming at the problems of traditional method of exercise recommendation precision, recall rate, long recommendation time, and poor recommendation comprehensiveness, this study proposes a personalized exercise recommendation method for English learning based on data mining. Firstly, a personalized recommendation model is designed, based on the model to preprocess the data in the Web access log, and cleaning the noise data to avoid its impact on the accuracy of the recommendation results is focused; secondly, the DINA model to diagnose the degree of mastery of students' knowledge points is used and the students' browsing patterns through fuzzy similar relationships are clustered; and finally, according to the clustering results, the similarity between students and the similarity between exercises are measured, and the collaborative filtering recommendation of personalized exercises for English learning is realized. The experimental results show that the exercise recommendation precision and recall rate of this method are higher, the recommendation time is shorter, and the recommendation results are comprehensive.},
issn={1058-9244},
doi={10.1155/2021/5042286},
url={https://doi.org/10.1155/2021/5042286}
}

@article{INGKAVARA2022100086,
title = {The use of a personalized learning approach to implementing self-regulated online learning},
journal = {Computers and Education: Artificial Intelligence},
volume = {3},
pages = {100086},
type = {article},
language = {English},
year = {2022},
issn = {2666-920X},
doi = {10.1016/j.caeai.2022.100086},
url = {https://www.sciencedirect.com/science/article/pii/S2666920X22000418},
author = {Thanyaluck Ingkavara and Patcharin Panjaburee and Niwat Srisawasdi and Suthiporn Sajjapanroj},
keywords = {Intelligent tutoring system, Personalization, Adaptive learning, E-learning, TAM, Artificial intelligence},
abstract = {Nowadays, students are encouraged to learn via online learning systems to promote students' autonomy. Scholars have found that students' self-regulated actions impact their academic success in an online learning environment. However, because traditional online learning systems cannot personalize feedback to the student's personality, most students have less chance to obtain helpful suggestions for enhancing their knowledge linked to their learning problems. This paper incorporated self-regulated online learning in the Physics classroom and used a personalized learning approach to help students receive proper learning paths and material corresponding to their learning preferences. This study conducted a quasi-experimental design using a quantitative approach to evaluate the effectiveness of the proposed learning environment in secondary schools. The experimental group of students participated in self-regulated online learning with a personalized learning approach, while the control group participated in conventional self-regulated online learning. The experimental results showed that the experimental group's post-test and the learning-gain score of the experimental group were significantly higher than those of the control group. Moreover, the results also suggested that the student's perceptions about the usefulness of learning suggestions, ease of use, goal setting, learning environmental structuring, task strategies, time management, self-evaluation, impact on learning, and attitude toward the learning environment are important predictors of behavioral intention to learn with the self-regulated online learning that integrated with the personalized learning approach.}
}

@article{HUANG2023104684,
title = {Effects of artificial Intelligence–Enabled personalized recommendations on learners' learning engagement, motivation, and outcomes in a flipped classroom},
journal = {Computers and Education},
volume = {194},
pages = {104684},
year = {2023},
language = {English},
type = {article},
issn = {0360-1315},
doi = {10.1016/j.compedu.2022.104684},
url = {https://www.sciencedirect.com/science/article/pii/S036013152200255X},
author = {Anna Y.Q. Huang and Owen H.T. Lu and Stephen J.H. Yang},
keywords = {Data science applications in education, Distance education and online learning, Improving classroom teaching},
abstract = {The flipped classroom approach is aimed at improving learning outcomes by promoting learning motivation and engagement. Recommendation systems can also be used to improve learning outcomes. With the rapid development of artificial intelligence (AI) technology, various systems have been developed to facilitate student learning. Accordingly, we applied AI-enabled personalized video recommendations to stimulate students' learning motivation and engagement during a systems programming course in a flipped classroom setting. We assigned students to control and experimental groups comprising 59 and 43 college students, respectively. The students in both groups received flipped classroom instruction, but only those in the experimental group received AI-enabled personalized video recommendations. We quantitatively measured students' engagement based on their learning profiles in a learning management system. The results revealed that the AI-enabled personalized video recommendations could significantly improve the learning performance and engagement of students with a moderate motivation level.}
}

@article{ZHAO2023118535,
title = {A recommendation system for effective learning strategies: An integrated approach using context-dependent DEA},
journal = {Expert Systems with Applications},
volume = {211},
pages = {118535},
year = {2023},
language = {English},
type = {article},
issn = {0957-4174},
doi = {10.1016/j.eswa.2022.118535},
url = {https://www.sciencedirect.com/science/article/pii/S0957417422016104},
author = {Lu-Tao Zhao and Dai-Song Wang and Feng-Yun Liang and Jian Chen},
keywords = {Recommendation system, Learning strategies, Context-dependent DEA, Efficiency analysis},
abstract = {Universities have been focusing on increasing individualized training and providing appropriate education for students. The individual differences and learning needs of college students should be given enough attention. From the perspective of learning efficiency, we establish a clustering hierarchical progressive improvement model (CHPI), which is based on cluster analysis and context-dependent data envelopment analysis (DEA) methods. The CHPI clusters students' ontological features, employs the context-dependent DEA method to stratify students of different classes, and calculates measures, such as obstacles, to determine the reference path for individuals with inefficient learning processes. The learning strategies are determined according to the gap between the inefficient individual to be improved and the individuals on the reference path. By the study of college English courses as an example, it is found that the CHPI can accurately recommend targeted learning strategies to satisfy the individual needs of college students so that the learning of individuals with inefficient learning processes in a certain stage can be effectively improved. In addition, CHPI can provide specific, efficient suggestions to improve learning efficiency comparing to existing recommendation systems, and has great potential in promoting the integration of education-related researches and expert systems.}
}

@article{SU2022109547,
title = {Graph-based cognitive diagnosis for intelligent tutoring systems},
journal = {Knowledge-Based Systems},
volume = {253},
pages = {109547},
year = {2022},
language = {English},
type = {article},
issn = {0950-7051},
doi = {10.1016/j.knosys.2022.109547},
url = {https://www.sciencedirect.com/science/article/pii/S095070512200778X},
author = {Yu Su and Zeyu Cheng and Jinze Wu and Yanmin Dong and Zhenya Huang and Le Wu and Enhong Chen and Shijin Wang and Fei Xie},
keywords = {Cognitive diagnosis, Graph neural networks, Interpretable machine learning},
abstract = {For intelligent tutoring systems, Cognitive Diagnosis (CD) is a fundamental task that aims to estimate the mastery degree of a student on each skill according to the exercise record. The CD task is considered rather challenging since we need to model inner-relations and inter-relations among students, skills, and questions to obtain more abundant information. Most existing methods attempt to solve this problem through two-way interactions between students and questions (or between students and skills), ignoring potential high-order relations among entities. Furthermore, how to construct an end-to-end framework that can model the complex interactions among different types of entities at the same time remains unexplored. Therefore, in this paper, we propose a graph-based Cognitive Diagnosis model (GCDM) that directly discovers the interactions among students, skills, and questions through a heterogeneous cognitive graph. Specifically, we design two graph-based layers: a performance-relative propagator and an attentive knowledge aggregator. The former is applied to propagate a student's cognitive state through different types of graph edges, while the latter selectively gathers messages from neighboring graph nodes. Extensive experimental results on two real-world datasets clearly show the effectiveness and extendibility of our proposed model.}
}

@article{EZALDEEN2022100700,
title = {A hybrid E-learning recommendation integrating adaptive profiling and sentiment analysis},
journal = {Journal of Web Semantics},
volume = {72},
pages = {100700},
year = {2022},
type = {article},
language = {English},
issn = {1570-8268},
doi = {10.1016/j.websem.2021.100700},
url = {https://www.sciencedirect.com/science/article/pii/S1570826821000664},
author = {Hadi Ezaldeen and Rachita Misra and Sukant Kishoro Bisoy and Rawaa Alatrash and Rojalina Priyadarshini},
keywords = {Hybrid E-learning recommendation, Adaptive profiling, Semantic learner profile, Fine-grained sentiment analysis, Convolutional Neural Network, Word embeddings},
abstract = {This research proposes a novel framework named Enhanced e-Learning Hybrid Recommender System (ELHRS) that provides an appropriate e-content with the highest predicted ratings corresponding to the learner's particular needs. To accomplish this, a new model is developed to deduce the Semantic Learner Profile automatically. It adaptively associates the learning patterns and rules depending on the learner's behavior and the semantic relations computed in the semantic matrix that mutually links e-learning materials and terms. Here, a semantic-based approach for term expansion is introduced using DBpedia and WordNet ontologies. Further, various sentiment analysis models are proposed and incorporated as a part of the recommender system to predict ratings of e-learning resources from posted text reviews utilizing fine-grained sentiment classification on five discrete classes. Qualitative Natural Language Processing (NLP) methods with tailored-made Convolutional Neural Network (CNN) are developed and evaluated on our customized dataset collected for a specific domain and a public dataset. Two improved language models are introduced depending on Skip-Gram (S-G) and Continuous Bag of Words (CBOW) techniques. In addition, a robust language model based on hybridization of these couple of methods is developed to derive better vocabulary representation, yielding better accuracy 89.1% for the CNN-Three-Channel-Concatenation model. The suggested recommendation methodology depends on the learner's preferences, other similar learners' experience and background, deriving their opinions from the reviews towards the best learning resources. This assists the learners in finding the desired e-content at the proper time.}
}

@article{MUANGPRATHUB2020e05227,
title = {Learning recommendation with formal concept analysis for intelligent tutoring system},
journal = {Heliyon},
volume = {6},
number = {10},
pages = {e05227},
language = {English},
type = {article},
year = {2020},
issn = {2405-8440},
doi = {10.1016/j.heliyon.2020.e05227},
url = {https://www.sciencedirect.com/science/article/pii/S2405844020320703},
author = {Jirapond Muangprathub and Veera Boonjing and Kosin Chamnongthai},
keywords = {Computer Science, Learning recommendation, Formal concept analysis, Intelligent tutoring system, Adaptive learning},
abstract = {The aim of this research was to develop a learning recommendation component in an intelligent tutoring system (ITS) that dynamically predicts and adapts to a learner's style. In order to develop a proper ITS, we present an improved knowledge base supporting adaptive learning, which can be achieved by a suitable knowledge construction. This process is illustrated by implementing a web-based online tutor system. In addition, our knowledge structure provides adaptive presentation and personalized learning with the proposed adaptive algorithm, to retrieve content according to individual learner characteristics. To demonstrate the proposed adaptive algorithm, pre-test and post-test were used to evaluate suggestion accuracy of the course in a class for adapting to a learner's style. In addition, pre- and post-testing were also used with students in a real teaching/learning environment to evaluate the performance of the proposed model. The results show that the proposed system can be used to help students or learners achieve improved learning.}
}

@article{min8100434,
author = {Leikola, Maria and Sauer, Christian and Rintala, Lotta and Aromaa, Jari and Lundström, Mari},
title = {Assessing the Similarity of Cyanide-Free Gold Leaching Processes: A Case-Based Reasoning Application},
journal = {Minerals},
volume = {8},
type = {article},
language = {English},
year = {2018},
number = {10},
url = {https://www.mdpi.com/2075-163X/8/10/434},
issn = {2075-163X},
keywords = {hydrometallurgy, cyanide-free gold, knowledge modelling, case-based reasoning, information retrieval},
abstract = {Hydrometallurgical researchers, and other professionals alike, invest significant amounts of time reading scientific articles, technical notes, and other scientific documents, while looking for the most relevant information for their particular research interest. In an attempt to save the researcher’s time, this study presents an information retrieval tool using case-based reasoning. The tool was built for comparing scientific articles concerning cyanide-free leaching of gold ores/concentrates/tailings. Altogether, 50 cases of experiments were gathered in a case base. 15 different attributes related to the treatment of the raw material and the leaching conditions were selected to compare the cases. The attributes were as follows: Pretreatment, Overall method, Complexant source, Oxidant source, Complexant concentration, Oxidant concentration, Temperature, pH, Redox-potential, Pressure, Materials of construction, Extraction, Extraction rate, Reagent consumption, and Solid-liquid ratio. The resulting retrieval tool (LeachSim) was able to rank the scientific articles according to their similarity with the user’s research interest. Such a tool could eventually aid the user in finding the most relevant information, but not replace thorough understanding and human expertise.},
doi = {10.3390/min8100434}
}

@article{10.1145/3459665,
author = {Cunningham, P\'{a}draig and Delany, Sarah Jane},
title = {K-Nearest Neighbour Classifiers - A Tutorial},
year = {2021},
issue_date = {July 2022},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
type = {article},
language = {English},
volume = {54},
number = {6},
issn = {0360-0300},
url = {https://doi.org/10.1145/3459665},
doi = {10.1145/3459665},
abstract = {Perhaps the most straightforward classifier in the arsenal of Machine Learning techniques is the Nearest Neighbour Classifier—classification is achieved by identifying the nearest neighbours to a query example and using those neighbours to determine the class of the query. This approach to classification is of particular importance, because issues of poor runtime performance is not such a problem these days with the computational power that is available. This article presents an overview of techniques for Nearest Neighbour classification focusing on: mechanisms for assessing similarity (distance), computational issues in identifying nearest neighbours, and mechanisms for reducing the dimension of the data. This article is the second edition of a paper previously published as a technical report [16]. Sections on similarity measures for time-series, retrieval speedup, and intrinsic dimensionality have been added. An Appendix is included, providing access to Python code for the key methods.},
journal = {ACM Comput. Surv.},
month = {jul},
articleno = {128},
numpages = {25},
keywords = {k-Nearest neighbour classifiers}
}

@article{9072123,
author = {Sinaga, Kristina P. and Yang, Miin-Shen},
journal = {IEEE Access},
type = {article},
language = {English},
title = {Unsupervised K-Means Clustering Algorithm},
year = {2020},
volume = {8},
number = {},
pages = {80716--80727},
doi = {10.1109/ACCESS.2020.2988796}
}

@article{WANG2021331,
title = {A new prediction strategy for dynamic multi-objective optimization using Gaussian Mixture Model},
journal = {Information Sciences},
volume = {580},
type = {article},
language = {English},
pages = {331--351},
year = {2021},
issn = {0020-0255},
doi = {10.1016/j.ins.2021.08.065},
url = {https://www.sciencedirect.com/science/article/pii/S0020025521008732},
author = {Feng Wang and Fanshu Liao and Yixuan Li and Hui Wang},
keywords = {Dynamic multi-objective optimization, Gaussian Mixture Model, Change type detection, Resampling},
abstract = {Dynamic multi-objective optimization problems (DMOPs), in which the environments change over time, have attracted many researchers’ attention in recent years. Since the Pareto set (PS) or the Pareto front (PF) can change over time, how to track the movement of the PS or PF is a challenging problem in DMOPs. Over the past few years, lots of methods have been proposed, and the prediction based strategy has been considered the most effective way to track the new PS. However, the performance of most existing prediction strategies depends greatly on the quantity and quality of the historical information and will deteriorate due to non-linear changes, leading to poor results. In this paper, we propose a new prediction method, named MOEA/D-GMM, which incorporates the Gaussian Mixture Model (GMM) into the MOEA/D framework for the prediction of the new PS when changes occur. Since GMM is a powerful non-linear model to accurately fit various data distributions, it can effectively generate solutions with better quality according to the distributions. In the proposed algorithm, a change type detection strategy is first designed to estimate an approximate PS according to different change types. Then, GMM is employed to make a more accurate prediction by training it with the approximate PS. To overcome the shortcoming of a lack of training solutions for GMM, the Empirical Cumulative Distribution Function (ECDF) method is used to resample more training solutions before GMM training. Experimental results on various benchmark test problems and a classical real-world problem show that, compared with some state-of-the-art dynamic optimization algorithms, MOEA/D-GMM outperforms others in most cases.}
}

@article{9627973,
author = {Xu, Shengbing and Cai, Wei and Xia, Hongxi and Liu, Bo and Xu, Jie},
journal = {IEEE Access},
title = {Dynamic Metric Accelerated Method for Fuzzy Clustering},
year = {2021},
type = {article},
language = {English},
volume = {9},
number = {},
pages = {166838--166854},
doi = {10.1109/ACCESS.2021.3131368}
}

@article{9434422,
author = {Gupta, Samarth and Chaudhari, Shreyas and Joshi, Gauri and Yağan, Osman},
journal = {IEEE Transactions on Information Theory},
title = {Multi-Armed Bandits With Correlated Arms},
year = {2021},
language = {English},
type = {article},
volume = {67},
number = {10},
pages = {6711--6732},
doi = {10.1109/TIT.2021.3081508}
}

@Inproceedings{8495930,
author = {Supic, H.},
booktitle = {2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE)},
title = {Case-Based Reasoning Model for Personalized Learning Path Recommendation in Example-Based Learning Activities},
year = {2018},
type = {article},
language = {English},
volume = {},
number = {},
pages = {175--178},
doi = {10.1109/WETICE.2018.00040}
}

@Inproceedings{9870279,
author = {Lin, Baihan},
booktitle = {2022 IEEE Congress on Evolutionary Computation (CEC)},
title = {Evolutionary Multi-Armed Bandits with Genetic Thompson Sampling},
year = {2022},
type = {article},
language = {English},
volume = {},
number = {},
pages = {1--8},
doi = {10.1109/CEC55065.2022.9870279}
}

@article{Obeid,
author = {Obeid, C. and Lahoud, C. and Khoury, H. E. and Champin, P.},
title = {A Novel Hybrid Recommender System Approach for Student Academic Advising Named COHRS, Supported by Case-based Reasoning and Ontology},
journal = {Computer Science and Information Systems},
type = {article},
language = {English},
volume = {19},
number = {2},
pages = {979--1005},
year = {2022},
doi = {10.2298/CSIS220215011O}
}

@book{Nkambou,
author = {Nkambou, R. and Bourdeau, J. and Mizoguchi, R.},
title = {Advances in Intelligent Tutoring Systems},
year = {2010},
type = {book},
language = {English},
publisher = {Springer Berlin, Heidelberg},
edition = {1}
}

@book{hajduk2019cognitive,
title = {Cognitive Multi-agent Systems: Structures, Strategies and Applications to Mobile Robotics and Robosoccer},
author = {Hajduk, M. and Sukop, M. and Haun, M.},
type = {book},
language = {English},
isbn = {9783319936857},
series = {Studies in Systems, Decision and Control},
year = {2019},
publisher = {Springer International Publishing}
}

@article{RICHTER20093,
title = {The search for knowledge, contexts, and Case-Based Reasoning},
journal = {Engineering Applications of Artificial Intelligence},
language = {English},
type = {article},
volume = {22},
number = {1},
pages = {3--9},
year = {2009},
issn = {0952-1976},
doi = {10.1016/j.engappai.2008.04.021},
url = {https://www.sciencedirect.com/science/article/pii/S095219760800078X},
author = {Michael M. Richter},
keywords = {Case-Based Reasoning, Knowledge, Processes, Utility, Context},
abstract = {A major goal of this paper is to compare Case-Based Reasoning with other methods searching for knowledge. We consider knowledge as a resource that can be traded. It has no value in itself; the value is measured by the usefulness of applying it in some process. Such a process has info-needs that have to be satisfied. The concept to measure this is the economical term utility. In general, utility depends on the user and its context, i.e., it is subjective. Here, we introduce levels of contexts from general to individual. We illustrate that Case-Based Reasoning on the lower, i.e., more personal levels CBR is quite useful, in particular in comparison with traditional informational retrieval methods.}
}

@Thesis{Marie,
author = {Marie, F.},
title = {COLISEUM-3D. Une plate-forme innovante pour la segmentation d’images médicales par Raisonnement à Partir de Cas (RàPC) et méthodes d’apprentissage de type Deep Learning},
type = {diplomathesis},
language = {French},
institution = {Université de Franche-Comté},
year = {2019}
}

@book{Hoang,
title = {La formule du savoir. Une philosophie unifiée du savoir fondée sur le théorème de Bayes},
author = {Hoang, L.N.},
type = {book},
language = {French},
isbn = {9782759822607},
year = {2018},
publisher = {EDP Sciences}
}

@book{Richter2013,
title = {Case-Based Reasoning (A Textbook)},
author = {Richter, M. and Weber, R.},
type = {book},
language = {English},
isbn = {9783642401664},
year = {2013},
publisher = {Springer-Verlag GmbH}
}

@book{kedia2020hands,
title = {Hands-On Python Natural Language Processing: Explore tools and techniques to analyze and process text with a view to building real-world NLP applications},
author = {Kedia, A. and Rasu, M.},
language = {English},
type = {book},
isbn = {9781838982584},
url = {https://books.google.fr/books?id=1AbuDwAAQBAJ},
year = {2020},
publisher = {Packt Publishing}
}

@book{ghosh2019natural,
title = {Natural Language Processing Fundamentals: Build intelligent applications that can interpret the human language to deliver impactful results},
author = {Ghosh, S. and Gunning, D.},
language = {English},
type = {book},
isbn = {9781789955989},
url = {https://books.google.fr/books?id=i8-PDwAAQBAJ},
year = {2019},
publisher = {Packt Publishing}
}

@article{Akerblom,
title = {Online learning of network bottlenecks via minimax paths},
author = {{\AA}kerblom, Niklas and Hoseini, Fazeleh Sadat and Haghir Chehreghani, Morteza},
language = {English},
type = {article},
volume = {122},
year = {2023},
issn = {1573-0565},
doi = {10.1007/s10994-022-06270-0},
url = {https://doi.org/10.1007/s10994-022-06270-0},
abstract = {In this paper, we study bottleneck identification in networks via extracting minimax paths. Many real-world networks have stochastic weights for which full knowledge is not available in advance. Therefore, we model this task as a combinatorial semi-bandit problem to which we apply a combinatorial version of Thompson Sampling and establish an upper bound on the corresponding Bayesian regret. Due to the computational intractability of the problem, we then devise an alternative problem formulation which approximates the original objective. Finally, we experimentally evaluate the performance of Thompson Sampling with the approximate formulation on real-world directed and undirected networks.}
}

@article{Simen,
title = {Dynamic slate recommendation with gated recurrent units and Thompson sampling},
author = {Eide, Simen and Leslie, David S. and Frigessi, Arnoldo},
language = {English},
type = {article},
volume = {36},
year = {2022},
issn = {1573-756X},
doi = {10.1007/s10618-022-00849-w},
url = {https://doi.org/10.1007/s10618-022-00849-w},
abstract = {We consider the problem of recommending relevant content to users of an internet platform in the form of lists of items, called slates. We introduce a variational Bayesian Recurrent Neural Net recommender system that acts on time series of interactions between the internet platform and the user, and which scales to real world industrial situations. The recommender system is tested both online on real users, and on an offline dataset collected from a Norwegian web-based marketplace, FINN.no, that is made public for research. This is one of the first publicly available datasets which includes all the slates that are presented to users as well as which items (if any) in the slates were clicked on. Such a data set allows us to move beyond the common assumption that implicitly assumes that users are considering all possible items at each interaction. Instead we build our likelihood using the items that are actually in the slate, and evaluate the strengths and weaknesses of both approaches theoretically and in experiments. We also introduce a hierarchical prior for the item parameters based on group memberships. Both item parameters and user preferences are learned probabilistically. Furthermore, we combine our model with bandit strategies to ensure learning, and introduce ‘in-slate Thompson sampling’ which makes use of the slates to maximise explorative opportunities. We show experimentally that explorative recommender strategies perform on par or above their greedy counterparts. Even without making use of exploration to learn more effectively, click rates increase simply because of improved diversity in the recommended slates.}
}

@Inproceedings{Arthurs,
author = {Arthurs, Noah and Stenhaug, Ben and Karayev, Sergey and Piech, Chris},
booktitle = {International Conference on Educational Data Mining (EDM)},
title = {Grades Are Not Normal: Improving Exam Score Models Using the Logit-Normal Distribution},
year = {2019},
type = {article},
language = {English},
volume = {},
number = {},
pages = {6},
url = {https://eric.ed.gov/?id=ED599204}
}

@article{Bahramian,
title={A Cold Start Context-Aware Recommender System for Tour Planning Using Artificial Neural Network and Case Based Reasoning},
author={Bahramian, Zahra and Ali Abbaspour, Rahim and Claramunt, Christophe},
language={English},
type={article},
year = {2017},
issn = {1574-017X},
doi = {https://doi.org/10.1155/2017/9364903},
url = {https://doi.org/10.1155/2017/9364903},
abstract={Nowadays, large amounts of tourism information and services are available over the Web. This makes it difficult for the user to search for some specific information such as selecting a tour in a given city as an ordered set of points of interest. Moreover, the user rarely knows all his needs upfront and his preferences may change during a recommendation process. The user may also have a limited number of initial ratings and most often the recommender system is likely to face the well-known cold start problem. The objective of the research presented in this paper is to introduce a hybrid interactive context-aware tourism recommender system that takes into account user’s feedbacks and additional contextual information. It offers personalized tours to the user based on his preferences thanks to the combination of a case based reasoning framework and an artificial neural network. The proposed method has been tried in the city of Tehran in Iran. The results show that the proposed method outperforms current artificial neural network methods and combinations of case based reasoning with k-nearest neighbor methods in terms of user effort, accuracy, and user satisfaction.}
}

@Thesis{Daubias2011,
author={Stéphanie Jean-Daubias},
title={Ingénierie des profils d'apprenants},
type={diplomathesis},
language={French},
institution={Université Claude Bernard Lyon 1},
year={2011}
}

@article{Tapalova,
author = {Olga Tapalova and Nadezhda Zhiyenbayeva},
title = {Artificial Intelligence in Education: AIEd for Personalised Learning Pathways},
journal = {Electronic Journal of e-Learning},
volume = {},
number = {},
pages = {15},
year = {2022},
URL = {https://eric.ed.gov/?q=Artificial+Intelligence+in+Education%3a+AIEd+for+Personalised+Learning+Pathways&id=EJ1373006},
language={English},
type={article},
abstract = {Artificial intelligence is the driving force of change focusing on the needs and demands of the student. The research explores Artificial Intelligence in Education (AIEd) for building personalised learning systems for students. The research investigates and proposes a framework for AIEd: social networking sites and chatbots, expert systems for education, intelligent mentors and agents, machine learning, personalised educational systems and virtual educational environments. These technologies help educators to develop and introduce personalised approaches to master new knowledge and develop professional competencies. The research presents a case study of AIEd implementation in education. The scholars conducted the experiment in educational establishments using artificial intelligence in the curriculum. The scholars surveyed 184 second-year students of the Institute of Pedagogy and Psychology at the Abay Kazakh National Pedagogical University and the Kuban State Technological University to collect the data. The scholars considered the collective group discussions regarding the application of artificial intelligence in education to improve the effectiveness of learning. The research identified key advantages to creating personalised learning pathways such as access to training in 24/7 mode, training in virtual contexts, adaptation of educational content to personal needs of students, real-time and regular feedback, improvements in the educational process and mental stimulations. The proposed education paradigm reflects the increasing role of artificial intelligence in socio-economic life, the social and ethical concerns artificial intelligence may pose to humanity and its role in the digitalisation of education. The current article may be used as a theoretical framework for many educational institutions planning to exploit the capabilities of artificial intelligence in their adaptation to personalized learning.}
}

@article{Auer,
title = {From monolithic systems to Microservices: An assessment framework},
journal = {Information and Software Technology},
volume = {137},
pages = {106600},
year = {2021},
issn = {0950-5849},
doi = {https://doi.org/10.1016/j.infsof.2021.106600},
url = {https://www.sciencedirect.com/science/article/pii/S0950584921000793},
author = {Florian Auer and Valentina Lenarduzzi and Michael Felderer and Davide Taibi},
keywords = {Microservices, Cloud migration, Software measurement},
abstract = {Context:
Re-architecting monolithic systems with Microservices-based architecture is a common trend. Various companies are migrating to Microservices for different reasons. However, making such an important decision like re-architecting an entire system must be based on real facts and not only on gut feelings.
Objective:
The goal of this work is to propose an evidence-based decision support framework for companies that need to migrate to Microservices, based on the analysis of a set of characteristics and metrics they should collect before re-architecting their monolithic system.
Method:
We conducted a survey done in the form of interviews with professionals to derive the assessment framework based on Grounded Theory.
Results:
We identified a set consisting of information and metrics that companies can use to decide whether to migrate to Microservices or not. The proposed assessment framework, based on the aforementioned metrics, could be useful for companies if they need to migrate to Microservices and do not want to run the risk of failing to consider some important information.}
}

@Article{jmse10040464,
AUTHOR = {Zuluaga, Carlos A. and Aristizábal, Luis M. and Rúa, Santiago and Franco, Diego A. and Osorio, Dorie A. and Vásquez, Rafael E.},
TITLE = {Development of a Modular Software Architecture for Underwater Vehicles Using Systems Engineering},
JOURNAL = {Journal of Marine Science and Engineering},
VOLUME = {10},
YEAR = {2022},
NUMBER = {4},
ARTICLE-NUMBER = {464},
URL = {https://www.mdpi.com/2077-1312/10/4/464},
ISSN = {2077-1312},
ABSTRACT = {This paper addresses the development of a modular software architecture for the design/construction/operation of a remotely operated vehicle (ROV), based on systems engineering. First, systems engineering and the Vee model are presented with the objective of defining the interactions of the stakeholders with the software architecture development team and establishing the baselines that must be met in each development phase. In the development stage, the definition of the architecture and its connection with the hardware is presented, taking into account the use of the actor model, which represents the high-level software architecture used to solve concurrency problems. Subsequently, the structure of the classes is defined both at high and low levels in the instruments using the object-oriented programming paradigm. Finally, unit tests are developed for each component in the software architecture, quality assessment tests are implemented for system functions fulfillment, and a field sea trial for testing different modules of the vehicle is described. This approach is well suited for the development of complex systems such as marine vehicles and those systems which require scalability and modularity to add functionalities.},
DOI = {10.3390/jmse10040464}
}

@article{doi:10.1177/1754337116651013,
author = {Julien Henriet and Christophe Lang and Laurent Philippe},
title = {Artificial Intelligence-Virtual Trainer: An educative system based on artificial intelligence and designed to produce varied and consistent training lessons},
journal = {Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology},
volume = {231},
number = {2},
pages = {110-124},
year = {2017},
doi = {10.1177/1754337116651013},
URL = {https://doi.org/10.1177/1754337116651013},
eprint = {https://doi.org/10.1177/1754337116651013},
abstract = {AI-Virtual Trainer is an educative system using Artificial Intelligence to propose varied lessons to trainers. The agents of this multi-agent system apply case-based reasoning to build solutions by analogy. However, as required by the field, Artificial Intelligence-Virtual Trainer never proposes the same lesson twice, whereas the same objective may be set many times consecutively. The adaptation process of Artificial Intelligence-Virtual Trainer delivers an ordered set of exercises adapted to the objectives and sub-objectives chosen by trainers. This process has been enriched by including the notion of distance between exercises: the proposed tasks are not only appropriate but are hierarchically ordered. With this new version of the system, students are guided towards their objectives via an underlying theme. Finally, the agents responsible for the different parts of lessons collaborate with each other according to a dedicated protocol and decision-making policy since no exercise must appear more than once in the same lesson. The results prove that Artificial Intelligence-Virtual Trainer, however perfectible, meets the requirements of this field.}
}

@InProceedings{10.1007/978-3-030-01081-2_9,
author="Henriet, Julien
and Greffier, Fran{\c{c}}oise",
editor="Cox, Michael T.
and Funk, Peter
and Begum, Shahina",
title="AI-VT: An Example of CBR that Generates a Variety of Solutions to the Same Problem",
booktitle="Case-Based Reasoning Research and Development",
year="2018",
publisher="Springer International Publishing",
address="Cham",
pages="124--139",
abstract="AI-Virtual Trainer (AI-VT) is an intelligent tutoring system based on case-based reasoning. AI-VT has been designed to generate personalised, varied, and consistent training sessions for learners. The AI-VT training sessions propose different exercises in regard to a capacity associated with sub-capacities. For example, in the field of training for algorithms, a capacity could be ``Use a control structure alternative'' and an associated sub-capacity could be ``Write a boolean condition''. AI-VT can elaborate a personalised list of exercises for each learner. One of the main requirements and challenges studied in this work is its ability to propose varied training sessions to the same learner for many weeks, which constitutes the challenge studied in our work. Indeed, if the same set of exercises is proposed time after time to learners, they will stop paying attention and lose motivation. Thus, even if the generation of training sessions is based on analogy and must integrate the repetition of some exercises, it also must introduce some diversity and AI-VT must deal with this diversity. In this paper, we have highlighted the fact that the retaining (or capitalisation) phase of CBR is of the utmost importance for diversity, and we have also highlighted that the equilibrium between repetition and variety depends on the abilities learned. This balance has an important impact on the retaining phase of AI-VT.",
isbn="978-3-030-01081-2"
}

@article{BAKUROV2021100913,
title = {Genetic programming for stacked generalization},
journal = {Swarm and Evolutionary Computation},
volume = {65},
pages = {100913},
year = {2021},
issn = {2210-6502},
doi = {https://doi.org/10.1016/j.swevo.2021.100913},
url = {https://www.sciencedirect.com/science/article/pii/S2210650221000742},
author = {Illya Bakurov and Mauro Castelli and Olivier Gau and Francesco Fontanella and Leonardo Vanneschi},
keywords = {Genetic Programming, Stacking, Ensemble Learning, Stacked Generalization},
abstract = {In machine learning, ensemble techniques are widely used to improve the performance of both classification and regression systems. They combine the models generated by different learning algorithms, typically trained on different data subsets or with different parameters, to obtain more accurate models. Ensemble strategies range from simple voting rules to more complex and effective stacked approaches. They are based on adopting a meta-learner, i.e. a further learning algorithm, and are trained on the predictions provided by the single algorithms making up the ensemble. The paper aims at exploiting some of the most recent genetic programming advances in the context of stacked generalization. In particular, we investigate how the evolutionary demes despeciation initialization technique, ϵ-lexicase selection, geometric-semantic operators, and semantic stopping criterion, can be effectively used to improve GP-based systems’ performance for stacked generalization (a.k.a. stacking). The experiments, performed on a broad set of synthetic and real-world regression problems, confirm the effectiveness of the proposed approach.}
}

@article{Liang,
author={Liang Mang and Chang Tianpeng and An Bingxing and Duan Xinghai and Du Lili and Wang Xiaoqiao and Miao Jian and Xu Lingyang and Gao Xue and Zhang Lupei and Li Junya and Gao Huijiang},
title={A Stacking Ensemble Learning Framework for Genomic Prediction},
journal={Frontiers in Genetics},
year={2021},
doi={10.3389/fgene.2021.600040},
PMID={33747037},
PMCID={PMC7969712}
}

@Article{cmc.2023.033417,
AUTHOR = {Jeonghoon Choi and Dongjun Suh and Marc-Oliver Otto},
TITLE = {Boosted Stacking Ensemble Machine Learning Method for Wafer Map Pattern Classification},
JOURNAL = {Computers, Materials \& Continua},
VOLUME = {74},
YEAR = {2023},
NUMBER = {2},
PAGES = {2945--2966},
URL = {http://www.techscience.com/cmc/v74n2/50296},
ISSN = {1546-2226},
ABSTRACT = {Recently, machine learning-based technologies have been developed to automate the classification of wafer map defect patterns during semiconductor manufacturing. The existing approaches used in the wafer map pattern classification include directly learning the image through a convolution neural network and applying the ensemble method after extracting image features. This study aims to classify wafer map defects more effectively and derive robust algorithms even for datasets with insufficient defect patterns. First, the number of defects during the actual process may be limited. Therefore, insufficient data are generated using convolutional auto-encoder (CAE), and the expanded data are verified using the evaluation technique of structural similarity index measure (SSIM). After extracting handcrafted features, a boosted stacking ensemble model that integrates the four base-level classifiers with the extreme gradient boosting classifier as a meta-level classifier is designed and built for training the model based on the expanded data for final prediction. Since the proposed algorithm shows better performance than those of existing ensemble classifiers even for insufficient defect patterns, the results of this study will contribute to improving the product quality and yield of the actual semiconductor manufacturing process.},
DOI = {10.32604/cmc.2023.033417}
}

@ARTICLE{10.3389/fgene.2021.600040, 819 819 @ARTICLE{10.3389/fgene.2021.600040,
AUTHOR={Liang, Mang and Chang, Tianpeng and An, Bingxing and Duan, Xinghai and Du, Lili and Wang, Xiaoqiao and Miao, Jian and Xu, Lingyang and Gao, Xue and Zhang, Lupei and Li, Junya and Gao, Huijiang}, 820 820 AUTHOR={Liang, Mang and Chang, Tianpeng and An, Bingxing and Duan, Xinghai and Du, Lili and Wang, Xiaoqiao and Miao, Jian and Xu, Lingyang and Gao, Xue and Zhang, Lupei and Li, Junya and Gao, Huijiang},
TITLE={A Stacking Ensemble Learning Framework for Genomic Prediction}, 821 821 TITLE={A Stacking Ensemble Learning Framework for Genomic Prediction},
JOURNAL={Frontiers in Genetics}, 822 822 JOURNAL={Frontiers in Genetics},
VOLUME={12}, 823 823 VOLUME={12},
YEAR={2021}, 824 824 YEAR={2021},
URL={https://www.frontiersin.org/articles/10.3389/fgene.2021.600040}, 825 825 URL={https://www.frontiersin.org/articles/10.3389/fgene.2021.600040},
DOI={10.3389/fgene.2021.600040}, 826 826 DOI={10.3389/fgene.2021.600040},
ISSN={1664-8021}, 827 827 ISSN={1664-8021},
ABSTRACT={Machine learning (ML) is perhaps the most useful tool for the interpretation of large genomic datasets. However, the performance of a single machine learning method in genomic selection (GS) is currently unsatisfactory. To improve the genomic predictions, we constructed a stacking ensemble learning framework (SELF), integrating three machine learning methods, to predict genomic estimated breeding values (GEBVs). The present study evaluated the prediction ability of SELF by analyzing three real datasets, with different genetic architecture; comparing the prediction accuracy of SELF, base learners, genomic best linear unbiased prediction (GBLUP) and BayesB. For each trait, SELF performed better than base learners, which included support vector regression (SVR), kernel ridge regression (KRR) and elastic net (ENET). The prediction accuracy of SELF was, on average, 7.70% higher than GBLUP in three datasets. Except for the milk fat percentage (MFP) traits, of the German Holstein dairy cattle dataset, SELF was more robust than BayesB in all remaining traits. Therefore, we believed that SEFL has the potential to be promoted to estimate GEBVs in other animals and plants.} 828 828 ABSTRACT={Machine learning (ML) is perhaps the most useful tool for the interpretation of large genomic datasets. However, the performance of a single machine learning method in genomic selection (GS) is currently unsatisfactory. To improve the genomic predictions, we constructed a stacking ensemble learning framework (SELF), integrating three machine learning methods, to predict genomic estimated breeding values (GEBVs). The present study evaluated the prediction ability of SELF by analyzing three real datasets, with different genetic architecture; comparing the prediction accuracy of SELF, base learners, genomic best linear unbiased prediction (GBLUP) and BayesB. 
For each trait, SELF performed better than base learners, which included support vector regression (SVR), kernel ridge regression (KRR) and elastic net (ENET). The prediction accuracy of SELF was, on average, 7.70% higher than GBLUP in three datasets. Except for the milk fat percentage (MFP) traits, of the German Holstein dairy cattle dataset, SELF was more robust than BayesB in all remaining traits. Therefore, we believe that SELF has the potential to be promoted to estimate GEBVs in other animals and plants.}
} 829 829 }
830 830
@article{DIDDEN2023338, 831 831 @article{DIDDEN2023338,
title = {Decentralized learning multi-agent system for online machine shop scheduling problem}, 832 832 title = {Decentralized learning multi-agent system for online machine shop scheduling problem},
journal = {Journal of Manufacturing Systems}, 833 833 journal = {Journal of Manufacturing Systems},
volume = {67}, 834 834 volume = {67},
pages = {338-360}, 835 835 pages = {338-360},
year = {2023}, 836 836 year = {2023},
issn = {0278-6125}, 837 837 issn = {0278-6125},
doi = {10.1016/j.jmsy.2023.02.004},
url = {https://www.sciencedirect.com/science/article/pii/S0278612523000286}, 839 839 url = {https://www.sciencedirect.com/science/article/pii/S0278612523000286},
author = {Jeroen B.H.C. Didden and Quang-Vinh Dang and Ivo J.B.F. Adan}, 840 840 author = {Jeroen B.H.C. Didden and Quang-Vinh Dang and Ivo J.B.F. Adan},
keywords = {Multi-agent system, Decentralized systems, Learning algorithm, Industry 4.0, Smart manufacturing}, 841 841 keywords = {Multi-agent system, Decentralized systems, Learning algorithm, Industry 4.0, Smart manufacturing},
abstract = {Customer profiles have rapidly changed over the past few years, with products being requested with more customization and with lower demand. In addition to the advances in technologies owing to Industry 4.0, manufacturers explore autonomous and smart factories. This paper proposes a decentralized multi-agent system (MAS), including intelligent agents that can respond to their environment autonomously through learning capabilities, to cope with an online machine shop scheduling problem. In the proposed system, agents participate in auctions to receive jobs to process, learn how to bid for jobs correctly, and decide when to start processing a job. The objective is to minimize the mean weighted tardiness of all jobs. In contrast to the existing literature, the proposed MAS is assessed on its learning capabilities, producing novel insights concerning what is relevant for learning, when re-learning is needed, and system response to dynamic events (such as rush jobs, increase in processing time, and machine unavailability). Computational experiments also reveal the outperformance of the proposed MAS to other multi-agent systems by at least 25% and common dispatching rules in mean weighted tardiness, as well as other performance measures.} 842 842 abstract = {Customer profiles have rapidly changed over the past few years, with products being requested with more customization and with lower demand. In addition to the advances in technologies owing to Industry 4.0, manufacturers explore autonomous and smart factories. This paper proposes a decentralized multi-agent system (MAS), including intelligent agents that can respond to their environment autonomously through learning capabilities, to cope with an online machine shop scheduling problem. In the proposed system, agents participate in auctions to receive jobs to process, learn how to bid for jobs correctly, and decide when to start processing a job. 
The objective is to minimize the mean weighted tardiness of all jobs. In contrast to the existing literature, the proposed MAS is assessed on its learning capabilities, producing novel insights concerning what is relevant for learning, when re-learning is needed, and system response to dynamic events (such as rush jobs, increase in processing time, and machine unavailability). Computational experiments also reveal the outperformance of the proposed MAS to other multi-agent systems by at least 25% and common dispatching rules in mean weighted tardiness, as well as other performance measures.}
} 843 843 }
844 844
@article{REZAEI20221, 845 845 @article{REZAEI20221,
title = {A Biased Inferential Naivety learning model for a network of agents}, 846 846 title = {A Biased Inferential Naivety learning model for a network of agents},
journal = {Cognitive Systems Research}, 847 847 journal = {Cognitive Systems Research},
volume = {76}, 848 848 volume = {76},
pages = {1-12}, 849 849 pages = {1-12},
year = {2022}, 850 850 year = {2022},
issn = {1389-0417}, 851 851 issn = {1389-0417},
doi = {10.1016/j.cogsys.2022.07.001},
url = {https://www.sciencedirect.com/science/article/pii/S1389041722000298}, 853 853 url = {https://www.sciencedirect.com/science/article/pii/S1389041722000298},
author = {Zeinab Rezaei and Saeed Setayeshi and Ebrahim Mahdipour}, 854 854 author = {Zeinab Rezaei and Saeed Setayeshi and Ebrahim Mahdipour},
keywords = {Bayesian decision making, Heuristic method, Inferential naivety assumption, Observational learning, Social learning}, 855 855 keywords = {Bayesian decision making, Heuristic method, Inferential naivety assumption, Observational learning, Social learning},
abstract = {We propose a Biased Inferential Naivety social learning model. In this model, a group of agents tries to determine the true state of the world and make the best possible decisions. The agents have limited computational abilities. They receive noisy private signals about the true state and observe the history of their neighbors' decisions. The proposed model is rooted in the Bayesian method but avoids the complexity of fully Bayesian inference. In our model, the role of knowledge obtained from social observations is separated from the knowledge obtained from private observations. Therefore, the Bayesian inferences on social observations are approximated using inferential naivety assumption, while purely Bayesian inferences are made on private observations. The reduction of herd behavior is another innovation of the proposed model. This advantage is achieved by reducing the effect of social observations on agents' beliefs over time. Therefore, all the agents learn the truth, and the correct consensus is achieved effectively. In this model, using two cognitive biases, there is heterogeneity in agents' behaviors. Therefore, the growth of beliefs and the learning speed can be improved in different situations. Several Monte Carlo simulations confirm the features of the proposed model. The conditions under which the proposed model leads to asymptotic learning are proved.} 856 856 abstract = {We propose a Biased Inferential Naivety social learning model. In this model, a group of agents tries to determine the true state of the world and make the best possible decisions. The agents have limited computational abilities. They receive noisy private signals about the true state and observe the history of their neighbors' decisions. The proposed model is rooted in the Bayesian method but avoids the complexity of fully Bayesian inference. 
In our model, the role of knowledge obtained from social observations is separated from the knowledge obtained from private observations. Therefore, the Bayesian inferences on social observations are approximated using inferential naivety assumption, while purely Bayesian inferences are made on private observations. The reduction of herd behavior is another innovation of the proposed model. This advantage is achieved by reducing the effect of social observations on agents' beliefs over time. Therefore, all the agents learn the truth, and the correct consensus is achieved effectively. In this model, using two cognitive biases, there is heterogeneity in agents' behaviors. Therefore, the growth of beliefs and the learning speed can be improved in different situations. Several Monte Carlo simulations confirm the features of the proposed model. The conditions under which the proposed model leads to asymptotic learning are proved.}
} 857 857 }
858 858
@article{KAMALI2023110242, 859 859 @article{KAMALI2023110242,
title = {An immune inspired multi-agent system for dynamic multi-objective optimization}, 860 860 title = {An immune inspired multi-agent system for dynamic multi-objective optimization},
journal = {Knowledge-Based Systems}, 861 861 journal = {Knowledge-Based Systems},
volume = {262}, 862 862 volume = {262},
pages = {110242}, 863 863 pages = {110242},
year = {2023}, 864 864 year = {2023},
issn = {0950-7051}, 865 865 issn = {0950-7051},
doi = {10.1016/j.knosys.2022.110242},
url = {https://www.sciencedirect.com/science/article/pii/S0950705122013387}, 867 867 url = {https://www.sciencedirect.com/science/article/pii/S0950705122013387},
author = {Seyed Ruhollah Kamali and Touraj Banirostam and Homayun Motameni and Mohammad Teshnehlab}, 868 868 author = {Seyed Ruhollah Kamali and Touraj Banirostam and Homayun Motameni and Mohammad Teshnehlab},
keywords = {Immune inspired multi-agent system, Dynamic multi-objective optimization, Severe and frequent changes}, 869 869 keywords = {Immune inspired multi-agent system, Dynamic multi-objective optimization, Severe and frequent changes},
abstract = {In this research, an immune inspired multi-agent system (IMAS) is proposed to solve optimization problems in dynamic and multi-objective environments. The proposed IMAS uses artificial immune system metaphors to shape the local behaviors of agents to detect environmental changes, generate Pareto optimal solutions, and react to the dynamics of the problem environment. Apart from that, agents enhance their adaptive capacity in dealing with environmental changes to find the global optimum, with a hierarchical structure without any central control. This study used a combination of diversity-, multi-population- and memory-based approaches to perform better in multi-objective environments with severe and frequent changes. The proposed IMAS is compared with six state-of-the-art algorithms on various benchmark problems. The results indicate its superiority in many of the experiments.} 870 870 abstract = {In this research, an immune inspired multi-agent system (IMAS) is proposed to solve optimization problems in dynamic and multi-objective environments. The proposed IMAS uses artificial immune system metaphors to shape the local behaviors of agents to detect environmental changes, generate Pareto optimal solutions, and react to the dynamics of the problem environment. Apart from that, agents enhance their adaptive capacity in dealing with environmental changes to find the global optimum, with a hierarchical structure without any central control. This study used a combination of diversity-, multi-population- and memory-based approaches to perform better in multi-objective environments with severe and frequent changes. The proposed IMAS is compared with six state-of-the-art algorithms on various benchmark problems. The results indicate its superiority in many of the experiments.}
} 871 871 }
872 872
@article{ZHANG2023110564, 873 873 @article{ZHANG2023110564,
title = {A novel human learning optimization algorithm with Bayesian inference learning}, 874 874 title = {A novel human learning optimization algorithm with Bayesian inference learning},
journal = {Knowledge-Based Systems}, 875 875 journal = {Knowledge-Based Systems},
volume = {271}, 876 876 volume = {271},
pages = {110564}, 877 877 pages = {110564},
year = {2023}, 878 878 year = {2023},
issn = {0950-7051}, 879 879 issn = {0950-7051},
doi = {10.1016/j.knosys.2023.110564},
url = {https://www.sciencedirect.com/science/article/pii/S0950705123003143}, 881 881 url = {https://www.sciencedirect.com/science/article/pii/S0950705123003143},
author = {Pinggai Zhang and Ling Wang and Zixiang Fei and Lisheng Wei and Minrui Fei and Muhammad Ilyas Menhas}, 882 882 author = {Pinggai Zhang and Ling Wang and Zixiang Fei and Lisheng Wei and Minrui Fei and Muhammad Ilyas Menhas},
keywords = {Human learning optimization, Meta-heuristic, Bayesian inference, Bayesian inference learning, Individual learning, Social learning}, 883 883 keywords = {Human learning optimization, Meta-heuristic, Bayesian inference, Bayesian inference learning, Individual learning, Social learning},
abstract = {Humans perform Bayesian inference in a wide variety of tasks, which can help people make selection decisions effectively and therefore enhances learning efficiency and accuracy. Inspired by this fact, this paper presents a novel human learning optimization algorithm with Bayesian inference learning (HLOBIL), in which a Bayesian inference learning operator (BILO) is developed to utilize the inference strategy for enhancing learning efficiency. The in-depth analysis shows that the proposed BILO can efficiently improve the exploitation ability of the algorithm as it can achieve the optimal values and retrieve the optimal information with the accumulated search information. Besides, the exploration ability of HLOBIL is also strengthened by the inborn characteristics of Bayesian inference. 
The experimental results demonstrate that the developed HLOBIL is superior to previous HLO variants and other state-of-art algorithms with its improved exploitation and exploration abilities.}
} 885 885 }
886 886
@article{HIPOLITO2023103510, 887 887 @article{HIPOLITO2023103510,
title = {Breaking boundaries: The Bayesian Brain Hypothesis for perception and prediction}, 888 888 title = {Breaking boundaries: The Bayesian Brain Hypothesis for perception and prediction},
journal = {Consciousness and Cognition}, 889 889 journal = {Consciousness and Cognition},
volume = {111}, 890 890 volume = {111},
pages = {103510}, 891 891 pages = {103510},
year = {2023}, 892 892 year = {2023},
issn = {1053-8100}, 893 893 issn = {1053-8100},
doi = {10.1016/j.concog.2023.103510},
url = {https://www.sciencedirect.com/science/article/pii/S1053810023000478}, 895 895 url = {https://www.sciencedirect.com/science/article/pii/S1053810023000478},
author = {Inês Hipólito and Michael Kirchhoff}, 896 896 author = {Inês Hipólito and Michael Kirchhoff},
keywords = {Bayesian Brain Hypothesis, Modularity of the Mind, Cognitive processes, Informational boundaries}, 897 897 keywords = {Bayesian Brain Hypothesis, Modularity of the Mind, Cognitive processes, Informational boundaries},
abstract = {This special issue aims to provide a comprehensive overview of the current state of the Bayesian Brain Hypothesis and its standing across neuroscience, cognitive science and the philosophy of cognitive science. By gathering cutting-edge research from leading experts, this issue seeks to showcase the latest advancements in our understanding of the Bayesian brain, as well as its potential implications for future research in perception, cognition, and motor control. A special focus to achieve this aim is adopted in this special issue, as it seeks to explore the relation between two seemingly incompatible frameworks for the understanding of cognitive structure and function: the Bayesian Brain Hypothesis and the Modularity Theory of the Mind. In assessing the compatibility between these theories, the contributors to this special issue open up new pathways of thinking and advance our understanding of cognitive processes.} 898 898 abstract = {This special issue aims to provide a comprehensive overview of the current state of the Bayesian Brain Hypothesis and its standing across neuroscience, cognitive science and the philosophy of cognitive science. By gathering cutting-edge research from leading experts, this issue seeks to showcase the latest advancements in our understanding of the Bayesian brain, as well as its potential implications for future research in perception, cognition, and motor control. A special focus to achieve this aim is adopted in this special issue, as it seeks to explore the relation between two seemingly incompatible frameworks for the understanding of cognitive structure and function: the Bayesian Brain Hypothesis and the Modularity Theory of the Mind. In assessing the compatibility between these theories, the contributors to this special issue open up new pathways of thinking and advance our understanding of cognitive processes.}
} 899 899 }
900 900
@article{LI2023424, 901 901 @article{LI2023424,
title = {Multi-agent evolution reinforcement learning method for machining parameters optimization based on bootstrap aggregating graph attention network simulated environment}, 902 902 title = {Multi-agent evolution reinforcement learning method for machining parameters optimization based on bootstrap aggregating graph attention network simulated environment},
journal = {Journal of Manufacturing Systems}, 903 903 journal = {Journal of Manufacturing Systems},
volume = {67}, 904 904 volume = {67},
pages = {424-438}, 905 905 pages = {424-438},
year = {2023}, 906 906 year = {2023},
issn = {0278-6125}, 907 907 issn = {0278-6125},
doi = {10.1016/j.jmsy.2023.02.015},
url = {https://www.sciencedirect.com/science/article/pii/S0278612523000390}, 909 909 url = {https://www.sciencedirect.com/science/article/pii/S0278612523000390},
author = {Weiye Li and Songping He and Xinyong Mao and Bin Li and Chaochao Qiu and Jinwen Yu and Fangyu Peng and Xin Tan}, 910 910 author = {Weiye Li and Songping He and Xinyong Mao and Bin Li and Chaochao Qiu and Jinwen Yu and Fangyu Peng and Xin Tan},
keywords = {Surface roughness, Cutting efficiency, Machining parameters optimization, Graph attention network, Multi-agent reinforcement learning, Evolutionary learning}, 911 911 keywords = {Surface roughness, Cutting efficiency, Machining parameters optimization, Graph attention network, Multi-agent reinforcement learning, Evolutionary learning},
abstract = {Improving machining quality and production efficiency is the focus of the manufacturing industry. How to obtain efficient machining parameters under multiple constraints such as machining quality is a severe challenge for the manufacturing industry. 
In this paper, a multi-agent evolutionary reinforcement learning method (MAERL) is proposed to optimize the machining parameters for high quality and high efficiency machining by combining the graph neural network and reinforcement learning. Firstly, a bootstrap aggregating graph attention network (Bagging-GAT) based roughness estimation method for machined surface is proposed, which combines the structural knowledge between machining parameters and vibration features. Secondly, a mathematical model of machining parameters optimization problem is established, which is formalized into Markov decision process (MDP), and a multi-agent reinforcement learning method is proposed to solve the MDP problem, and evolutionary learning is introduced to improve the stability of multi-agent training. Finally, a series of experiments were carried out on the commutator production line, and the results show that the proposed Bagging-GAT-based method can improve the prediction effect by about 25% in the case of small samples, and the MAERL-based optimization method can better deal with the coupling problem of reward function in the optimization process. Compared with the classical optimization method, the optimization effect is improved by 13% and a lot of optimization time is saved.}
} 913 913 }
914 914
@inproceedings{10.1145/3290605.3300912, 915 915 @inproceedings{10.1145/3290605.3300912,
author = {Kim, Yea-Seul and Walls, Logan A. and Krafft, Peter and Hullman, Jessica}, 916 916 author = {Kim, Yea-Seul and Walls, Logan A. and Krafft, Peter and Hullman, Jessica},
title = {A Bayesian Cognition Approach to Improve Data Visualization}, 917 917 title = {A Bayesian Cognition Approach to Improve Data Visualization},
year = {2019}, 918 918 year = {2019},
isbn = {9781450359702}, 919 919 isbn = {9781450359702},
publisher = {Association for Computing Machinery}, 920 920 publisher = {Association for Computing Machinery},
address = {New York, NY, USA}, 921 921 address = {New York, NY, USA},
url = {https://doi.org/10.1145/3290605.3300912}, 922 922 url = {https://doi.org/10.1145/3290605.3300912},
doi = {10.1145/3290605.3300912}, 923 923 doi = {10.1145/3290605.3300912},
abstract = {People naturally bring their prior beliefs to bear on how they interpret the new information, yet few formal models exist for accounting for the influence of users' prior beliefs in interactions with data presentations like visualizations. We demonstrate a Bayesian cognitive model for understanding how people interpret visualizations in light of prior beliefs and show how this model provides a guide for improving visualization evaluation. In a first study, we show how applying a Bayesian cognition model to a simple visualization scenario indicates that people's judgments are consistent with a hypothesis that they are doing approximate Bayesian inference. In a second study, we evaluate how sensitive our observations of Bayesian behavior are to different techniques for eliciting people subjective distributions, and to different datasets. We find that people don't behave consistently with Bayesian predictions for large sample size datasets, and this difference cannot be explained by elicitation technique. In a final study, we show how normative Bayesian inference can be used as an evaluation framework for visualizations, including of uncertainty.}, 924 924 abstract = {People naturally bring their prior beliefs to bear on how they interpret the new information, yet few formal models exist for accounting for the influence of users' prior beliefs in interactions with data presentations like visualizations. We demonstrate a Bayesian cognitive model for understanding how people interpret visualizations in light of prior beliefs and show how this model provides a guide for improving visualization evaluation. In a first study, we show how applying a Bayesian cognition model to a simple visualization scenario indicates that people's judgments are consistent with a hypothesis that they are doing approximate Bayesian inference. 
In a second study, we evaluate how sensitive our observations of Bayesian behavior are to different techniques for eliciting people's subjective distributions, and to different datasets. We find that people don't behave consistently with Bayesian predictions for large sample size datasets, and this difference cannot be explained by elicitation technique. In a final study, we show how normative Bayesian inference can be used as an evaluation framework for visualizations, including of uncertainty.},
booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems}, 925 925 booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
pages = {1–14}, 926 926 pages = {1–14},
numpages = {14}, 927 927 numpages = {14},
keywords = {bayesian cognition, uncertainty elicitation, visualization}, 928 928 keywords = {bayesian cognition, uncertainty elicitation, visualization},
location = {Glasgow, Scotland, UK},
series = {CHI '19} 930 930 series = {CHI '19}
} 931 931 }
932 932
@article{DYER2024104827, 933 933 @article{DYER2024104827,
title = {Black-box Bayesian inference for agent-based models}, 934 934 title = {Black-box Bayesian inference for agent-based models},
journal = {Journal of Economic Dynamics and Control}, 935 935 journal = {Journal of Economic Dynamics and Control},
volume = {161}, 936 936 volume = {161},
pages = {104827}, 937 937 pages = {104827},
year = {2024}, 938 938 year = {2024},
issn = {0165-1889}, 939 939 issn = {0165-1889},
doi = {10.1016/j.jedc.2024.104827},
url = {https://www.sciencedirect.com/science/article/pii/S0165188924000198}, 941 941 url = {https://www.sciencedirect.com/science/article/pii/S0165188924000198},
author = {Joel Dyer and Patrick Cannon and J. Doyne Farmer and Sebastian M. Schmon}, 942 942 author = {Joel Dyer and Patrick Cannon and J. Doyne Farmer and Sebastian M. Schmon},
keywords = {Agent-based models, Bayesian inference, Neural networks, Parameter estimation, Simulation-based inference, Time series},
abstract = {Simulation models, in particular agent-based models, are gaining popularity in economics and the social sciences. The considerable flexibility they offer, as well as their capacity to reproduce a variety of empirically observed behaviours of complex systems, give them broad appeal, and the increasing availability of cheap computing power has made their use feasible. Yet a widespread adoption in real-world modelling and decision-making scenarios has been hindered by the difficulty of performing parameter estimation for such models. In general, simulation models lack a tractable likelihood function, which precludes a straightforward application of standard statistical inference techniques. A number of recent works have sought to address this problem through the application of likelihood-free inference techniques, in which parameter estimates are determined by performing some form of comparison between the observed data and simulation output. However, these approaches are (a) founded on restrictive assumptions, and/or (b) typically require many hundreds of thousands of simulations. These qualities make them unsuitable for large-scale simulations in economics and the social sciences, and can cast doubt on the validity of these inference methods in such scenarios. In this paper, we investigate the efficacy of two classes of simulation-efficient black-box approximate Bayesian inference methods that have recently drawn significant attention within the probabilistic machine learning community: neural posterior estimation and neural density ratio estimation. We present a number of benchmarking experiments in which we demonstrate that neural network-based black-box methods provide state of the art parameter inference for economic simulation models, and crucially are compatible with generic multivariate or even non-Euclidean time-series data. In addition, we suggest appropriate assessment criteria for use in future benchmarking of approximate Bayesian inference procedures for simulation models in economics and the social sciences.}
}

@Article{Nikpour2021,
author={Nikpour, Hoda
and Aamodt, Agnar},
title={Inference and reasoning in a Bayesian knowledge-intensive CBR system},
journal={Progress in Artificial Intelligence},
year={2021},
month={Mar},
day={01},
volume={10},
number={1},
pages={49-63},
abstract={This paper presents the inference and reasoning methods in a Bayesian supported knowledge-intensive case-based reasoning (CBR) system called BNCreek. The inference and reasoning process in this system is a combination of three methods. The semantic network inference methods and the CBR method are employed to handle the difficulties of inferencing and reasoning in uncertain domains. The Bayesian network inference methods are employed to make the process more accurate. An experiment from oil well drilling as a complex and uncertain application domain is conducted. The system is evaluated against expert estimations and compared with seven other corresponding systems. The normalized discounted cumulative gain (NDCG) as a rank-based metric, the weighted error (WE), and root-square error (RSE) as the statistical metrics are employed to evaluate different aspects of the system capabilities. The results show the efficiency of the developed inference and reasoning methods.},
issn={2192-6360},
doi={10.1007/s13748-020-00223-1},
url={https://doi.org/10.1007/s13748-020-00223-1}
}

@article{PRESCOTT2024112577,
title = {Efficient multifidelity likelihood-free Bayesian inference with adaptive computational resource allocation},
journal = {Journal of Computational Physics},
volume = {496},
pages = {112577},
year = {2024},
issn = {0021-9991},
doi = {10.1016/j.jcp.2023.112577},
url = {https://www.sciencedirect.com/science/article/pii/S0021999123006721},
author = {Thomas P. Prescott and David J. Warne and Ruth E. Baker},
keywords = {Likelihood-free Bayesian inference, Multifidelity approaches},
abstract = {Likelihood-free Bayesian inference algorithms are popular methods for inferring the parameters of complex stochastic models with intractable likelihoods. These algorithms characteristically rely heavily on repeated model simulations. However, whenever the computational cost of simulation is even moderately expensive, the significant burden incurred by likelihood-free algorithms leaves them infeasible for many practical applications. The multifidelity approach has been introduced in the context of approximate Bayesian computation to reduce the simulation burden of likelihood-free inference without loss of accuracy, by using the information provided by simulating computationally cheap, approximate models in place of the model of interest. In this work we demonstrate that multifidelity techniques can be applied in the general likelihood-free Bayesian inference setting. Analytical results on the optimal allocation of computational resources to simulations at different levels of fidelity are derived, and subsequently implemented practically. We provide an adaptive multifidelity likelihood-free inference algorithm that learns the relationships between models at different fidelities and adapts resource allocation accordingly, and demonstrate that this algorithm produces posterior estimates with near-optimal efficiency.}
}

@article{RISTIC202030,
title = {A tutorial on uncertainty modeling for machine reasoning},
journal = {Information Fusion},
volume = {55},
pages = {30-44},
year = {2020},
issn = {1566-2535},
doi = {10.1016/j.inffus.2019.08.001},
url = {https://www.sciencedirect.com/science/article/pii/S1566253519301976},
author = {Branko Ristic and Christopher Gilliam and Marion Byrne and Alessio Benavoli},
keywords = {Information fusion, Uncertainty, Imprecision, Model based classification, Bayesian, Random sets, Belief function theory, Possibility functions, Imprecise probability},
abstract = {Increasingly we rely on machine intelligence for reasoning and decision making under uncertainty. This tutorial reviews the prevalent methods for model-based autonomous decision making based on observations and prior knowledge, primarily in the context of classification. Both observations and the knowledge-base available for reasoning are treated as being uncertain. Accordingly, the central themes of this tutorial are quantitative modeling of uncertainty, the rules required to combine such uncertain information, and the task of decision making under uncertainty. The paper covers the main approaches to uncertain knowledge representation and reasoning, in particular, Bayesian probability theory, possibility theory, reasoning based on belief functions and finally imprecise probability theory. The main feature of the tutorial is that it illustrates various approaches with several testing scenarios, and provides MATLAB solutions for them as a supplementary material for an interested reader.}
}

@article{CICIRELLO2022108619,
title = {Machine learning based optimization for interval uncertainty propagation},
journal = {Mechanical Systems and Signal Processing},
volume = {170},
pages = {108619},
year = {2022},
issn = {0888-3270},
doi = {10.1016/j.ymssp.2021.108619},
url = {https://www.sciencedirect.com/science/article/pii/S0888327021009493},
author = {Alice Cicirello and Filippo Giunta},
keywords = {Bounded uncertainty, Bayesian optimization, Expensive-to-evaluate deterministic computer models, Gaussian process, Communicating uncertainty},
abstract = {Two non-intrusive uncertainty propagation approaches are proposed for the performance analysis of engineering systems described by expensive-to-evaluate deterministic computer models with parameters defined as interval variables. These approaches employ a machine learning based optimization strategy, the so-called Bayesian optimization, for evaluating the upper and lower bounds of a generic response variable over the set of possible responses obtained when each interval variable varies independently over its range. The lack of knowledge caused by not evaluating the response function for all the possible combinations of the interval variables is accounted for by developing a probabilistic description of the response variable itself by using a Gaussian Process regression model. An iterative procedure is developed for selecting a small number of simulations to be evaluated for updating this statistical model by using well-established acquisition functions and to assess the response bounds. In both approaches, an initial training dataset is defined. While one approach builds iteratively two distinct training datasets for evaluating separately the upper and lower bounds of the response variable, the other one builds iteratively a single training dataset. Consequently, the two approaches will produce different bound estimates at each iteration. The upper and lower response bounds are expressed as point estimates obtained from the mean function of the posterior distribution. Moreover, a confidence interval on each estimate is provided for effectively communicating to engineers when these estimates are obtained at a combination of the interval variables for which no deterministic simulation has been run. Finally, two metrics are proposed to define conditions for assessing if the predicted bound estimates can be considered satisfactory. The applicability of these two approaches is illustrated with two numerical applications, one focusing on vibration and the other on vibro-acoustics.}
}

@INPROCEEDINGS{9278071,
author={Petit, Maxime and Dellandrea, Emmanuel and Chen, Liming},
booktitle={2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)},
title={Bayesian Optimization for Developmental Robotics with Meta-Learning by Parameters Bounds Reduction},
year={2020},
volume={},
number={},
pages={1-8},
keywords={Optimization;Robots;Task analysis;Bayes methods;Visualization;Service robots;Cognition;developmental robotics;long-term memory;meta learning;hyperparameters automatic optimization;case-based reasoning},
doi={10.1109/ICDL-EpiRob48136.2020.9278071}
}

@article{LI2023477,
title = {Hierarchical and partitioned planning strategy for closed-loop devices in low-voltage distribution network based on improved KMeans partition method},
journal = {Energy Reports},
volume = {9},
pages = {477-485},
year = {2023},
note = {2022 The 3rd International Conference on Power and Electrical Engineering},
issn = {2352-4847},
doi = {10.1016/j.egyr.2023.05.161},
url = {https://www.sciencedirect.com/science/article/pii/S2352484723009137},
author = {Jingqi Li and Junlin Li and Dan Wang and Chengxiong Mao and Zhitao Guan and Zhichao Liu and Miaomiao Du and Yuanzhuo Qi and Lexiang Wang and Wenge Liu and Pengfei Tang},
keywords = {Closed-loop device, Distribution network partition, Device planning, Hierarchical planning, Improved KMeans partition method},
abstract = {To improve the reliability of power supply, this paper proposes a hierarchical and partitioned planning strategy for closed-loop devices in low-voltage distribution network. Based on the geographic location and load situation of the distribution network area, an improved KMeans partition method is used to partition the area in the upper layer. In the lower layer, an intelligent algorithm is adopted to decide the numbers and placement locations of mobile low-voltage contact boxes and mobile seamless closed-loop load transfer devices in each partition with the goal of the highest closed-loop safety, the greatest improvement in annual power outage amount and the lowest cost. Finally, the feasibility and effectiveness of the proposed strategy are proved by an example.}
}

@article{SAXENA2024100838,
title = {Hybrid KNN-SVM machine learning approach for solar power forecasting},
journal = {Environmental Challenges},
volume = {14},
pages = {100838},
year = {2024},
issn = {2667-0100},
doi = {10.1016/j.envc.2024.100838},
url = {https://www.sciencedirect.com/science/article/pii/S2667010024000040},
author = {Nishant Saxena and Rahul Kumar and Yarrapragada K S S Rao and Dilbag Singh Mondloe and Nishikant Kishor Dhapekar and Abhishek Sharma and Anil Singh Yadav},
keywords = {Solar power forecasting, Hybrid model, KNN, Optimization, Solar energy, SVM},
abstract = {Predictions about solar power will have a significant impact on large-scale renewable energy plants. Photovoltaic (PV) power generation forecasting is particularly sensitive to measuring the uncertainty in weather conditions. Although several conventional techniques like long short-term memory (LSTM), support vector machine (SVM), etc. are available, but due to some restrictions, their application is limited. To enhance the precision of forecasting solar power from solar farms, a hybrid machine learning model that includes blends of the K-Nearest Neighbor (KNN) machine learning technique with the SVM to increase reliability for power system operators is proposed in this investigation. The conventional LSTM technique is also implemented to compare the performance of the proposed hybrid technique. The suggested hybrid model is improved by the use of structural diversity and data diversity in KNN and SVM, respectively. For the solar power predictions, the suggested method was tested on the Jodhpur real-time series dataset obtained from the data centers of weather stations using Meteonorm. The data set includes metrics such as Hourly Average Temperature (HAT), Hourly Total Sunlight Duration (HTSD), Hourly Total Global Solar Radiation (HTGSR), and Hourly Total Photovoltaic Energy Generation (HTPEG). The collated data has been segmented into training data, validation data, and testing data. Furthermore, the proposed technique performed better when evaluated on the three performance indices, viz., accuracy, sensitivity, and specificity. Compared with the conventional LSTM technique, the hybrid technique improved the prediction with 98\% accuracy.}
}

@article{RAKESH2023100898,
title = {Moving object detection using modified GMM based background subtraction},
journal = {Measurement: Sensors},
volume = {30},
pages = {100898},
year = {2023},
issn = {2665-9174},
doi = {10.1016/j.measen.2023.100898},
url = {https://www.sciencedirect.com/science/article/pii/S2665917423002349},
author = {S. Rakesh and Nagaratna P. Hegde and M. {Venu Gopalachari} and D. Jayaram and Bhukya Madhu and Mohd Abdul Hameed and Ramdas Vankdothu and L.K. {Suresh Kumar}},
keywords = {Background subtraction, Gaussian mixture models, Intelligent video surveillance, Object detection},
abstract = {Academics have become increasingly interested in creating cutting-edge technologies to enhance Intelligent Video Surveillance (IVS) performance in terms of accuracy, speed, complexity, and deployment. It has been noted that precise object detection is the only way for IVS to function well in higher level applications including event interpretation, tracking, classification, and activity recognition. Through the use of cutting-edge techniques, the current study seeks to improve the performance accuracy of object detection techniques based on Gaussian Mixture Models (GMM). It is achieved by developing crucial phases in the object detecting process. In this study, it is discussed how to model each pixel as a mixture of Gaussians and how to update the model using an online k-means approximation. The adaptive mixture model's Gaussian distributions are then analyzed to identify which ones are more likely to be the product of a background process. Each pixel is categorized according to whether the background model is thought to include the Gaussian distribution that best depicts it.}
}

@article{JIAO2022540, 1061 1061 @article{JIAO2022540,
title = {Interpretable fuzzy clustering using unsupervised fuzzy decision trees}, 1062 1062 title = {Interpretable fuzzy clustering using unsupervised fuzzy decision trees},
journal = {Information Sciences}, 1063 1063 journal = {Information Sciences},
volume = {611}, 1064 1064 volume = {611},
pages = {540-563}, 1065 1065 pages = {540-563},
year = {2022}, 1066 1066 year = {2022},
issn = {0020-0255}, 1067 1067 issn = {0020-0255},
doi = {https://doi.org/10.1016/j.ins.2022.08.077}, 1068 1068 doi = {https://doi.org/10.1016/j.ins.2022.08.077},
url = {https://www.sciencedirect.com/science/article/pii/S0020025522009872}, 1069 1069 url = {https://www.sciencedirect.com/science/article/pii/S0020025522009872},
author = {Lianmeng Jiao and Haoyu Yang and Zhun-ga Liu and Quan Pan}, 1070 1070 author = {Lianmeng Jiao and Haoyu Yang and Zhun-ga Liu and Quan Pan},
keywords = {Fuzzy clustering, Interpretable clustering, Unsupervised decision tree, Cluster merging},
abstract = {In clustering process, fuzzy partition performs better than hard partition when the boundaries between clusters are vague. Whereas, traditional fuzzy clustering algorithms produce less interpretable results, limiting their application in security, privacy, and ethics fields. To that end, this paper proposes an interpretable fuzzy clustering algorithm—fuzzy decision tree-based clustering which combines the flexibility of fuzzy partition with the interpretability of the decision tree. We constructed an unsupervised multi-way fuzzy decision tree to achieve the interpretability of clustering, in which each cluster is determined by one or several paths from the root to leaf nodes. The proposed algorithm comprises three main modules: feature and cutting point-selection, node fuzzy splitting, and cluster merging. The first two modules are repeated to generate an initial unsupervised decision tree, and the final module is designed to combine similar leaf nodes to form the final compact clustering model. Our algorithm optimizes an internal clustering validation metric to automatically determine the number of clusters without their initial positions. The synthetic and benchmark datasets were used to test the performance of the proposed algorithm. Furthermore, we provided two examples demonstrating its interest in solving practical problems.}
}

@article{ARNAUGONZALEZ2023101516,
title = {A methodological approach to enable natural language interaction in an Intelligent Tutoring System},
journal = {Computer Speech and Language},
volume = {81},
pages = {101516},
year = {2023},
issn = {0885-2308},
doi = {10.1016/j.csl.2023.101516},
url = {https://www.sciencedirect.com/science/article/pii/S0885230823000359},
author = {Pablo Arnau-González and Miguel Arevalillo-Herráez and Romina Albornoz-De Luise and David Arnau},
keywords = {Intelligent tutoring systems (ITS), Interactive learning environments (ILE), Conversational agents, Rasa, Natural language understanding (NLU), Natural language processing (NLP)},
abstract = {In this paper, we present and evaluate the recent incorporation of a conversational agent into an Intelligent Tutoring System (ITS), using the open-source machine learning framework Rasa. Once it has been appropriately trained, this tool is capable of identifying the intention of a given text input and extracting the relevant entities related to the message content. We describe both the generation of a realistic training set in Spanish language that enables the creation of the required Natural Language Understanding (NLU) models and the evaluation of the resulting system. For the generation of the training set, we have followed a methodology that can be easily exported to other ITS. The model evaluation shows that the conversational agent can correctly identify the majority of the user intents, reporting an f1-score above 95%, and cooperate with the ITS to produce a consistent dialogue flow that makes interaction more natural.}
}

@article{MAO20224065,
title = {An Exploratory Approach to Intelligent Quiz Question Recommendation},
journal = {Procedia Computer Science},
volume = {207},
pages = {4065--4074},
year = {2022},
note = {Knowledge-Based and Intelligent Information and Engineering Systems: Proceedings of the 26th International Conference KES2022},
issn = {1877-0509},
doi = {10.1016/j.procs.2022.09.469},
url = {https://www.sciencedirect.com/science/article/pii/S1877050922013631},
author = {Kejie Mao and Qiwen Dong and Ye Wang and Daocheng Hong},
keywords = {question recommendation, two-sided recommender systems, reinforcement learning, intelligent tutoring},
abstract = {With the rapid advancement of ICT, the digital transformation on education is greatly accelerating in various applications. As a particularly prominent application of digital education, quiz question recommendation is playing a vital role in precision teaching, smart tutoring, and personalized learning. However, the looming challenge of quiz question recommender for students is to satisfy the question diversity demands for students ZPD (the zone of proximal development) stage dynamically online. Therefore, we propose to formalize quiz question recommendation with a novel approach of reinforcement learning based two-sided recommender system. We develop a recommendation framework RTR (Reinforcement-Learning based Two-sided Recommender Systems) for taking into account the interests of both sides of the system, learning and adapting to those interests in real time, and resulting in more satisfactory recommended content. This established recommendation framework captures question characters and student dynamic preferences by considering the emergence of both sides of the system, and it yields a better learning experience in the context of practical quiz question generation.}
}

@article{CLEMENTE2022118171,
title = {A proposal for an adaptive Recommender System based on competences and ontologies},
journal = {Expert Systems with Applications},
volume = {208},
pages = {118171},
year = {2022},
issn = {0957-4174},
doi = {10.1016/j.eswa.2022.118171},
url = {https://www.sciencedirect.com/science/article/pii/S0957417422013392},
author = {Julia Clemente and Héctor Yago and Javier {de Pedro-Carracedo} and Javier Bueno},
keywords = {Recommender system, Ontology network, Methodological development, Student modeling},
abstract = {Context:
Competences represent an interesting pedagogical support in many processes like diagnosis or recommendation. From these, it is possible to infer information about the progress of the student to provide help targeted both, trainers who must make adaptive tutoring decisions for each learner, and students to detect and correct their learning weaknesses. For the correct development of any of these tasks, it is important to have a suitable student model that allows the representation of the most significant information possible about the student. Additionally, it would be very advantageous for this modeling to incorporate mechanisms from which it would be possible to infer more information about the student’s state of knowledge.
Objective:
To facilitate this goal, in this paper a new approach to develop an adaptive competence-based recommender system is proposed.
Method:
We present a methodological development guide as well as a set of ontological and non-ontological resources to develop and adapt the prototype of the proposed recommender system.
Results:
A modular flexible ontology network previously built for this purpose has been extended, which is responsible for recording the instructional design and student information. Furthermore, we describe a case study based on a first aid learning experience to assess the prototype with the proposed methodology.
Conclusions:
We highlight the relevance of flexibility and adaptability in learning modeling and recommendation processes. In order to promote improvement in the personalized learning of students, we present a Recommender System prototype taking advantages of ontologies, with a methodological guide, a broad taxonomy of recommendation criteria and the nature of competences. Future lines of research, including a more comprehensive evaluation of the system, will allow us to demonstrate in depth its adaptability according to the characteristics of the student, flexibility and extensibility for its integration in various environments and domains.}
}

@article{https://doi.org/10.1155/2023/2578286,
author = {Li, Linqing and Wang, Zhifeng},
title = {Knowledge Graph-Enhanced Intelligent Tutoring System Based on Exercise Representativeness and Informativeness},
journal = {International Journal of Intelligent Systems},
volume = {2023},
number = {1},
pages = {2578286},
doi = {10.1155/2023/2578286},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1155/2023/2578286},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1155/2023/2578286},
abstract = {In the realm of online tutoring intelligent systems, e-learners are exposed to a substantial volume of learning content. The extraction and organization of exercises and skills hold significant importance in establishing clear learning objectives and providing appropriate exercise recommendations. Presently, knowledge graph-based recommendation algorithms have garnered considerable attention among researchers. However, these algorithms solely consider knowledge graphs with single relationships and do not effectively model exercise-rich features, such as exercise representativeness and informativeness. Consequently, this paper proposes a framework, namely, the Knowledge Graph Importance-Exercise Representativeness and Informativeness Framework, to address these two issues. The framework consists of four intricate components and a novel cognitive diagnosis model called the Neural Attentive Cognitive Diagnosis model to recommend the proper exercises. These components encompass the informativeness component, exercise representation component, knowledge importance component, and exercise representativeness component. The informativeness component evaluates the informational value of each exercise and identifies the candidate exercise set (EC) that exhibits the highest exercise informativeness. Moreover, the exercise representation component utilizes a graph neural network to process student records. The output of the graph neural network serves as the input for exercise-level attention and skill-level attention, ultimately generating exercise embeddings and skill embeddings. Furthermore, the skill embeddings are employed as input for the knowledge importance component. This component transforms a one-dimensional knowledge graph into a multidimensional one through four class relations and calculates skill importance weights based on novelty and popularity. Subsequently, the exercise representativeness component incorporates exercise weight knowledge coverage to select exercises from the candidate exercise set for the tested exercise set. Lastly, the cognitive diagnosis model leverages exercise representation and skill importance weights to predict student performance on the test set and estimate their knowledge state. To evaluate the effectiveness of our selection strategy, extensive experiments were conducted on two types of publicly available educational datasets. The experimental results demonstrate that our framework can recommend appropriate exercises to students, leading to improved student performance.},
year = {2023}
}

@inproceedings{badier:hal-04092828,
TITLE = {{Comprendre les usages et effets d'un syst{\`e}me de recommandations p{\'e}dagogiques en contexte d'apprentissage non-formel}},
AUTHOR = {Badier, Ana{\"e}lle and Lefort, Mathieu and Lefevre, Marie},
URL = {https://hal.science/hal-04092828},
BOOKTITLE = {{EIAH'23}},
ADDRESS = {Brest, France},
YEAR = {2023},
MONTH = Jun,
HAL_ID = {hal-04092828},
HAL_VERSION = {v1},
}

@article{BADRA2023108920,
title = {Case-based prediction – A survey},
journal = {International Journal of Approximate Reasoning},
volume = {158},
pages = {108920},
year = {2023},
issn = {0888-613X},
doi = {10.1016/j.ijar.2023.108920},
url = {https://www.sciencedirect.com/science/article/pii/S0888613X23000440},
author = {Fadi Badra and Marie-Jeanne Lesot},
keywords = {Case-based prediction, Analogical transfer, Similarity},
abstract = {This paper clarifies the relation between case-based prediction and analogical transfer. Case-based prediction consists in predicting the outcome associated with a new case directly from its comparison with a set of cases retrieved from a case base, by relying solely on a structured memory and some similarity measures. Analogical transfer is a cognitive process that allows to derive some new information about a target situation by applying a plausible inference principle, according to which if two situations are similar with respect to some criteria, then it is plausible that they are also similar with respect to other criteria. Case-based prediction algorithms are known to apply analogical transfer to make predictions, but the existing approaches are diverse, and developing a unified theory of case-based prediction remains a challenge. In this paper, we show that a common principle underlying case-based prediction methods is that they interpret the plausible inference as a transfer of similarity knowledge from a situation space to an outcome space. Among all potential outcomes, the predicted outcome is the one that optimizes this transfer, i.e., that makes the similarities in the outcome space most compatible with the observed similarities in the situation space. Based on this observation, a systematic analysis of the different theories of case-based prediction is presented, where the approaches are distinguished according to the type of knowledge used to measure the compatibility between the two sets of similarity relations.}
}


@Article{jmse11050890,
AUTHOR = {Louvros, Panagiotis and Stefanidis, Fotios and Boulougouris, Evangelos and Komianos, Alexandros and Vassalos, Dracos},
TITLE = {Machine Learning and Case-Based Reasoning for Real-Time Onboard Prediction of the Survivability of Ships},
JOURNAL = {Journal of Marine Science and Engineering},
VOLUME = {11},
YEAR = {2023},
NUMBER = {5},
ARTICLE-NUMBER = {890},
URL = {https://www.mdpi.com/2077-1312/11/5/890},
ISSN = {2077-1312},
ABSTRACT = {The subject of damaged stability has greatly profited from the development of new tools and techniques in recent history. Specifically, the increased computational power and the probabilistic approach have transformed the subject, increasing accuracy and fidelity, hence allowing for a universal application and the inclusion of the most probable scenarios. Currently, all ships are evaluated for their stability and are expected to survive the dangers they will most likely face. However, further advancements in simulations have made it possible to further increase the fidelity and accuracy of simulated casualties. Multiple time domain and, to a lesser extent, Computational Fluid dynamics (CFD) solutions have been suggested as the next “evolutionary” step for damage stability. However, while those techniques are demonstrably more accurate, the computational power to utilize them for the task of probabilistic evaluation is not there yet. In this paper, the authors present a novel approach that aims to serve as a stopgap measure for introducing the time domain simulations in the existing framework. Specifically, the methodology presented serves the purpose of a fast decision support tool which is able to provide information regarding the ongoing casualty utilizing prior knowledge gained from simulations. This work was needed and developed for the purposes of the EU-funded project SafePASS.},
DOI = {10.3390/jmse11050890}
}


@Article{su14031366,
AUTHOR = {Chun, Se-Hak and Jang, Jae-Won},
TITLE = {A New Trend Pattern-Matching Method of Interactive Case-Based Reasoning for Stock Price Predictions},
JOURNAL = {Sustainability},
VOLUME = {14},
YEAR = {2022},
NUMBER = {3},
ARTICLE-NUMBER = {1366},
URL = {https://www.mdpi.com/2071-1050/14/3/1366},
ISSN = {2071-1050},
ABSTRACT = {In this paper, we suggest a new case-based reasoning method for stock price predictions using the knowledge of traders to select similar past patterns among nearest neighbors obtained from a traditional case-based reasoning machine. Thus, this method overcomes the limitation of conventional case-based reasoning, which does not consider how to retrieve similar neighbors from previous patterns in terms of a graphical pattern. In this paper, we show how the proposed method can be used when traders find similar time series patterns among nearest cases. For this, we suggest an interactive prediction system where traders can select similar patterns with individual knowledge among automatically recommended neighbors by case-based reasoning. In this paper, we demonstrate how traders can use their knowledge to select similar patterns using a graphical interface, serving as an exemplar for the target. These concepts are investigated against the backdrop of a practical application involving the prediction of three individual stock prices, i.e., Zoom, Airbnb, and Twitter, as well as the prediction of the Dow Jones Industrial Average (DJIA). The verification of the prediction results is compared with a random walk model based on the RMSE and Hit ratio. The results show that the proposed technique is more effective than the random walk model but it does not statistically surpass the random walk model.},
DOI = {10.3390/su14031366}
}

@Article{fire7040107,
AUTHOR = {Pei, Qiuyan and Jia, Zhichao and Liu, Jia and Wang, Yi and Wang, Junhui and Zhang, Yanqi},
TITLE = {Prediction of Coal Spontaneous Combustion Hazard Grades Based on Fuzzy Clustered Case-Based Reasoning},
JOURNAL = {Fire},
VOLUME = {7},
YEAR = {2024},
NUMBER = {4},
ARTICLE-NUMBER = {107},
URL = {https://www.mdpi.com/2571-6255/7/4/107},
ISSN = {2571-6255},
ABSTRACT = {Accurate prediction of the coal spontaneous combustion hazard grades is of great significance to ensure the safe production of coal mines. However, traditional coal temperature prediction models have low accuracy and do not predict the coal spontaneous combustion hazard grades. In order to accurately predict coal spontaneous combustion hazard grades, a prediction model of coal spontaneous combustion based on principal component analysis (PCA), case-based reasoning (CBR), fuzzy clustering (FM), and the snake optimization (SO) algorithm was proposed in this manuscript. Firstly, based on the change rule of the concentration of signature gases in the process of coal warming, a new method of classifying the risk of spontaneous combustion of coal was established. Secondly, MeanRadius-SMOTE was adopted to balance the data structure. The weights of the prediction indicators were calculated through PCA to enhance the prediction precision of the CBR model. Then, by employing FM in the case base, the computational cost of CBR was reduced and its computational efficiency was improved. The SO algorithm was used to determine the hyperparameters in the PCA-FM-CBR model. In addition, multiple comparative experiments were conducted to verify the superiority of the model proposed in this manuscript. The results indicated that SO-PCA-FM-CBR possesses good prediction performance and also improves computational efficiency. Finally, the authors of this manuscript adopted the Random Balance Designs—Fourier Amplitude Sensitivity Test (RBD-FAST) to explain the output of the model and analyzed the global importance of input variables. The results demonstrated that CO is the most important variable affecting the coal spontaneous combustion hazard grades.},
DOI = {10.3390/fire7040107}
}

@Article{Desmarais2012,
author={Desmarais, Michel C.
and Baker, Ryan S. J. d.},
title={A review of recent advances in learner and skill modeling in intelligent learning environments},
journal={User Modeling and User-Adapted Interaction},
year={2012},
month={Apr},
day={01},
volume={22},
number={1},
pages={9--38},
abstract={In recent years, learner models have emerged from the research laboratory and research classrooms into the wider world. Learner models are now embedded in real world applications which can claim to have thousands, or even hundreds of thousands, of users. Probabilistic models for skill assessment are playing a key role in these advanced learning environments. In this paper, we review the learner models that have played the largest roles in the success of these learning environments, and also the latest advances in the modeling and assessment of learner skills. We conclude by discussing related advancements in modeling other key constructs such as learner motivation, emotional and attentional state, meta-cognition and self-regulated learning, group learning, and the recent movement towards open and shared learner models.},
issn={1573-1391},
doi={10.1007/s11257-011-9106-8},
url={https://doi.org/10.1007/s11257-011-9106-8}
}

@article{Eide,
title={Dynamic slate recommendation with gated recurrent units and Thompson sampling},
author={Eide, Simen and Leslie, David S. and Frigessi, Arnoldo},
language={English},
type={article},
volume = {36},
year = {2022},
issn = {1573-756X},
doi = {10.1007/s10618-022-00849-w},
url = {https://doi.org/10.1007/s10618-022-00849-w},
abstract={We consider the problem of recommending relevant content to users of an internet platform in the form of lists of items, called slates. We introduce a variational Bayesian Recurrent Neural Net recommender system that acts on time series of interactions between the internet platform and the user, and which scales to real world industrial situations. The recommender system is tested both online on real users, and on an offline dataset collected from a Norwegian web-based marketplace, FINN.no, that is made public for research. This is one of the first publicly available datasets which includes all the slates that are presented to users as well as which items (if any) in the slates were clicked on. Such a data set allows us to move beyond the common assumption that implicitly assumes that users are considering all possible items at each interaction. Instead we build our likelihood using the items that are actually in the slate, and evaluate the strengths and weaknesses of both approaches theoretically and in experiments. We also introduce a hierarchical prior for the item parameters based on group memberships. Both item parameters and user preferences are learned probabilistically. Furthermore, we combine our model with bandit strategies to ensure learning, and introduce ‘in-slate Thompson sampling’ which makes use of the slates to maximise explorative opportunities. We show experimentally that explorative recommender strategies perform on par or above their greedy counterparts. Even without making use of exploration to learn more effectively, click rates increase simply because of improved diversity in the recommended slates.}
}

@InProceedings{10.1007/978-3-031-09680-8_14,
author={Sablayrolles, Louis
and Lefevre, Marie
and Guin, Nathalie
and Broisin, Julien},
editor={Crossley, Scott
and Popescu, Elvira},
title={Design and Evaluation of a Competency-Based Recommendation Process},
booktitle={Intelligent Tutoring Systems},
year={2022},
publisher={Springer International Publishing},
address={Cham},
pages={148--160},
abstract={The purpose of recommending activities to learners is to provide them with resources adapted to their needs, to facilitate the learning process. However, when teachers face a large number of students, it is difficult for them to recommend a personalized list of resources to each learner. In this paper, we are interested in the design of a system that automatically recommends resources to learners using their cognitive profile expressed in terms of competencies, but also according to a specific strategy defined by teachers. Our contributions relate to (1) a competency-based pedagogical strategy allowing to express the teacher's expertise, and (2) a recommendation process based on this strategy. This process has been experimented and assessed with students learning Shell programming in a first-year computer science degree. The first results show that (i) the items selected by our system from the set of possible items were relevant according to the experts; (ii) our system provided recommendations in a reasonable time; (iii) the recommendations were consulted by the learners but lacked usability.},
isbn={978-3-031-09680-8}
}

@inproceedings{10.1145/3578337.3605122,
author = {Xu, Shuyuan and Ge, Yingqiang and Li, Yunqi and Fu, Zuohui and Chen, Xu and Zhang, Yongfeng},
title = {Causal Collaborative Filtering},
year = {2023},
isbn = {9798400700736},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3578337.3605122},
doi = {10.1145/3578337.3605122},
abstract = {Many of the traditional recommendation algorithms are designed based on the fundamental idea of mining or learning correlative patterns from data to estimate the user-item correlative preference. However, pure correlative learning may lead to Simpson's paradox in predictions, and thus results in sacrificed recommendation performance. Simpson's paradox is a well-known statistical phenomenon, which causes confusions in statistical conclusions and ignoring the paradox may result in inaccurate decisions. Fortunately, causal and counterfactual modeling can help us to think outside of the observational data for user modeling and personalization so as to tackle such issues. In this paper, we propose Causal Collaborative Filtering (CCF) --- a general framework for modeling causality in collaborative filtering and recommendation. We provide a unified causal view of CF and mathematically show that many of the traditional CF algorithms are actually special cases of CCF under simplified causal graphs. We then propose a conditional intervention approach for do-operations so that we can estimate the user-item causal preference based on the observational data. Finally, we further propose a general counterfactual constrained learning framework for estimating the user-item preferences. Experiments are conducted on two types of real-world datasets---traditional and randomized trial data---and results show that our framework can improve the recommendation performance and reduce the Simpson's paradox problem of many CF algorithms.},
booktitle = {Proceedings of the 2023 ACM SIGIR International Conference on Theory of Information Retrieval},
pages = {235--245},
numpages = {11},
keywords = {recommender systems, counterfactual reasoning, collaborative filtering, causal analysis, Simpson's paradox},
location = {Taipei, Taiwan},
series = {ICTIR '23}
}

@inproceedings{10.1145/3583780.3615048,
author = {Zhu, Zheqing and Van Roy, Benjamin},
title = {Scalable Neural Contextual Bandit for Recommender Systems},
year = {2023},
isbn = {9798400701245},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3583780.3615048},
doi = {10.1145/3583780.3615048},
abstract = {High-quality recommender systems ought to deliver both innovative and relevant content through effective and exploratory interactions with users. Yet, supervised learning-based neural networks, which form the backbone of many existing recommender systems, only leverage recognized user interests, falling short when it comes to efficiently uncovering unknown user preferences. While there has been some progress with neural contextual bandit algorithms towards enabling online exploration through neural networks, their onerous computational demands hinder widespread adoption in real-world recommender systems. In this work, we propose a scalable sample-efficient neural contextual bandit algorithm for recommender systems. To do this, we design an epistemic neural network architecture, Epistemic Neural Recommendation (ENR), that enables Thompson sampling at a large scale. In two distinct large-scale experiments with real-world tasks, ENR significantly boosts click-through rates and user ratings by at least 9\% and 6\% respectively compared to state-of-the-art neural contextual bandit algorithms. Furthermore, it achieves equivalent performance with at least 29\% fewer user interactions compared to the best-performing baseline algorithm. Remarkably, while accomplishing these improvements, ENR demands orders of magnitude fewer computational resources than neural contextual bandit baseline algorithms.},
booktitle = {Proceedings of the 32nd ACM International Conference on Information and Knowledge Management},
pages = {3636--3646},
numpages = {11},
keywords = {contextual bandits, decision making under uncertainty, exploration vs exploitation, recommender systems, reinforcement learning},
location = {Birmingham, United Kingdom},
series = {CIKM '23}
}

@ARTICLE{10494875,
author={Ghoorchian, Saeed and Kortukov, Evgenii and Maghsudi, Setareh},
journal={IEEE Open Journal of Signal Processing},
title={Non-Stationary Linear Bandits With Dimensionality Reduction for Large-Scale Recommender Systems},
year={2024},
volume={5},
number={},
pages={548--558},
keywords={Vectors;Recommender systems;Decision making;Runtime;Signal processing algorithms;Covariance matrices;Robustness;Decision-making;multi-armed bandit;non-stationary environment;online learning;recommender systems},
doi={10.1109/OJSP.2024.3386490}
}

@article{GIANNIKIS2024111752,
title = {Reinforcement learning for addressing the cold-user problem in recommender systems},
journal = {Knowledge-Based Systems},
volume = {294},
pages = {111752},
year = {2024},
issn = {0950-7051},
doi = {10.1016/j.knosys.2024.111752},
url = {https://www.sciencedirect.com/science/article/pii/S0950705124003873},
author = {Stelios Giannikis and Flavius Frasincar and David Boekestijn},
keywords = {Recommender systems, Reinforcement learning, Active learning, Cold-user problem},
abstract = {Recommender systems are widely used in webshops because of their ability to provide users with personalized recommendations. However, the cold-user problem (i.e., recommending items to new users) is an important issue many webshops face. With the recent General Data Protection Regulation in Europe, the use of additional user information such as demographics is not possible without the user’s explicit consent. Several techniques have been proposed to solve the cold-user problem. Many of these techniques utilize Active Learning (AL) methods, which let cold users rate items to provide better recommendations for them. In this research, we propose two novel approaches that combine reinforcement learning with AL to elicit the users’ preferences and provide them with personalized recommendations. We compare reinforcement learning approaches that are either AL-based or item-based, where the latter predicts users’ ratings of an item by using their ratings of similar items. Differently than many of the existing approaches, this comparison is made based on implicit user information. Using a large real-world dataset, we show that the item-based strategy is more accurate than the AL-based strategy as well as several existing AL strategies.}
}

@article{IFTIKHAR2024121541,
title = {A reinforcement learning recommender system using bi-clustering and Markov Decision Process},
journal = {Expert Systems with Applications},
volume = {237},
pages = {121541},
year = {2024},
issn = {0957-4174},
doi = {10.1016/j.eswa.2023.121541},
url = {https://www.sciencedirect.com/science/article/pii/S0957417423020432},
author = {Arta Iftikhar and Mustansar Ali Ghazanfar and Mubbashir Ayub and Saad {Ali Alahmari} and Nadeem Qazi and Julie Wall},
keywords = {Reinforcement learning, Markov Decision Process, Bi-clustering, Q-learning, Policy},
abstract = {Collaborative filtering (CF) recommender systems are static in nature and does not adapt well with changing user preferences. User preferences may change after interaction with a system or after buying a product. Conventional CF clustering algorithms only identifies the distribution of patterns and hidden correlations globally. However, the impossibility of discovering local patterns by these algorithms, headed to the popularization of bi-clustering algorithms. Bi-clustering algorithms can analyze all dataset dimensions simultaneously and consequently, discover local patterns that deliver a better understanding of the underlying hidden correlations. In this paper, we modelled the recommendation problem as a sequential decision-making problem using Markov Decision Processes (MDP). To perform state representation for MDP, we first converted user-item votings matrix to a binary matrix. Then we performed bi-clustering on this binary matrix to determine a subset of similar rows and columns. A bi-cluster merging algorithm is designed to merge similar and overlapping bi-clusters. These bi-clusters are then mapped to a squared grid (SG). RL is applied on this SG to determine best policy to give recommendation to users. Start state is determined using Improved Triangle Similarity (ITR) similarity measure. Reward function is computed as grid state overlapping in terms of users and items in current and prospective next state. A thorough comparative analysis was conducted, encompassing a diverse array of methodologies, including RL-based, pure Collaborative Filtering (CF), and clustering methods. The results demonstrate that our proposed method outperforms its competitors in terms of precision, recall, and optimal policy learning.}
}

@article{Soto2, 1334 1334 @article{Soto2,
author={Soto-Forero, Daniel and Ackermann, Simha and Betbeder, Marie-Laure and Henriet, Julien}, 1335 1335 author={Soto-Forero, Daniel and Ackermann, Simha and Betbeder, Marie-Laure and Henriet, Julien},
title={Automatic Real-Time Adaptation of Training Session Difficulty Using Rules and Reinforcement Learning in the AI-VT ITS}, 1336 1336 title={Automatic Real-Time Adaptation of Training Session Difficulty Using Rules and Reinforcement Learning in the AI-VT ITS},
journal = {International Journal of Modern Education and Computer Science (IJMECS)},
volume = {16}, 1338 1338 volume = {16},
pages = {56-71}, 1339 1339 pages = {56-71},
year = {2024}, 1340 1340 year = {2024},
issn = {2075-0161}, 1341 1341 issn = {2075-0161},
doi = {https://doi.org/10.5815/ijmecs.2024.03.05},
url = {https://www.mecs-press.org/ijmecs/ijmecs-v16-n3/v16n3-5.html}, 1343 1343 url = {https://www.mecs-press.org/ijmecs/ijmecs-v16-n3/v16n3-5.html},
keywords={Real Time Adaptation, Intelligent Training System, Thompson Sampling, Case-Based Reasoning, Automatic Adaptation}, 1344 1344 keywords={Real Time Adaptation, Intelligent Training System, Thompson Sampling, Case-Based Reasoning, Automatic Adaptation},
abstract={Some of the most common and typical issues in the field of intelligent tutoring systems (ITS) are (i) the correct identification of learners’ difficulties in the learning process, (ii) the adaptation of content or presentation of the system according to the difficulties encountered, and (iii) the ability to adapt without initial data (cold-start). In some cases, the system tolerates modifications after the realization and assessment of competences. Other systems require complicated real-time adaptation since only a limited number of data can be captured. In that case, it must be analyzed properly and with a certain precision in order to obtain the appropriate adaptations. Generally, for the adaptation step, the ITS gathers common learners together and adapts their training similarly. Another type of adaptation is more personalized, but requires acquired or estimated information about each learner (previous grades, probability of success, etc.). Some of these parameters may be difficult to obtain, and others are imprecise and can lead to misleading adaptations. The adaptation using machine learning requires prior training with a lot of data. This article presents a model for the real time automatic adaptation of a predetermined session inside an ITS called AI-VT. This adaptation process is part of a case-based reasoning global model. The characteristics of the model proposed in this paper (i) require a limited number of data in order to generate a personalized adaptation, (ii) do not require training, (iii) are based on the correlation to complexity levels, and (iv) are able to adapt even at the cold-start stage. The proposed model is presented with two different configurations, deterministic and stochastic. The model has been tested with a database of 1000 learners, corresponding to different knowledge levels in three different scenarios. 
The results show the dynamic adaptation of the proposed model in both versions, with the adaptations obtained helping the system to evolve more rapidly and identify learner weaknesses in the different levels of complexity as well as the generation of pertinent recommendations in specific cases for each learner capacity.} 1345 1345 abstract={Some of the most common and typical issues in the field of intelligent tutoring systems (ITS) are (i) the correct identification of learners’ difficulties in the learning process, (ii) the adaptation of content or presentation of the system according to the difficulties encountered, and (iii) the ability to adapt without initial data (cold-start). In some cases, the system tolerates modifications after the realization and assessment of competences. Other systems require complicated real-time adaptation since only a limited number of data can be captured. In that case, it must be analyzed properly and with a certain precision in order to obtain the appropriate adaptations. Generally, for the adaptation step, the ITS gathers common learners together and adapts their training similarly. Another type of adaptation is more personalized, but requires acquired or estimated information about each learner (previous grades, probability of success, etc.). Some of these parameters may be difficult to obtain, and others are imprecise and can lead to misleading adaptations. The adaptation using machine learning requires prior training with a lot of data. This article presents a model for the real time automatic adaptation of a predetermined session inside an ITS called AI-VT. This adaptation process is part of a case-based reasoning global model. The characteristics of the model proposed in this paper (i) require a limited number of data in order to generate a personalized adaptation, (ii) do not require training, (iii) are based on the correlation to complexity levels, and (iv) are able to adapt even at the cold-start stage. 
The proposed model is presented with two different configurations, deterministic and stochastic. The model has been tested with a database of 1000 learners, corresponding to different knowledge levels in three different scenarios. The results show the dynamic adaptation of the proposed model in both versions, with the adaptations obtained helping the system to evolve more rapidly and identify learner weaknesses in the different levels of complexity as well as the generation of pertinent recommendations in specific cases for each learner capacity.}
} 1346 1346 }
1347 1347
@InProceedings{10.1007/978-3-031-63646-2_11,
author={Soto-Forero, Daniel and Betbeder, Marie-Laure and Henriet, Julien}, 1349 1349 author={Soto-Forero, Daniel and Betbeder, Marie-Laure and Henriet, Julien},
editor={Recio-Garcia, Juan A. and Orozco-del-Castillo, Mauricio G. and Bridge, Derek}, 1350 1350 editor={Recio-Garcia, Juan A. and Orozco-del-Castillo, Mauricio G. and Bridge, Derek},
title={Ensemble Stacking Case-Based Reasoning for Regression}, 1351 1351 title={Ensemble Stacking Case-Based Reasoning for Regression},
booktitle={Case-Based Reasoning Research and Development}, 1352 1352 booktitle={Case-Based Reasoning Research and Development},
year={2024}, 1353 1353 year={2024},
publisher={Springer Nature Switzerland}, 1354 1354 publisher={Springer Nature Switzerland},
address={Cham}, 1355 1355 address={Cham},
pages={159--174}, 1356 1356 pages={159--174},
abstract={This paper presents a case-based reasoning algorithm with a two-stage iterative double stacking to find approximate solutions to one and multidimensional regression problems. This approach does not require training, so it can work with dynamic data at run time. The solutions are generated using stochastic algorithms in order to allow exploration of the solution space. The evaluation is performed by transforming the regression problem into an optimization problem with an associated objective function. The algorithm has been tested in comparison with nine classical regression algorithms on ten different regression databases extracted from the UCI site. The results show that the proposed algorithm generates solutions in most cases quite close to the real solutions. According to the RMSE, the proposed algorithm is globally among the four best algorithms, and according to the MAE, among the four best of the ten evaluated, suggesting that the results are reasonably good.},
isbn={978-3-031-63646-2} 1358 1358 isbn={978-3-031-63646-2}
} 1359 1359 }
1360 1360
@article{ZHANG2018189, 1361 1361 @article{ZHANG2018189,
title = {A three learning states Bayesian knowledge tracing model}, 1362 1362 title = {A three learning states Bayesian knowledge tracing model},
journal = {Knowledge-Based Systems}, 1363 1363 journal = {Knowledge-Based Systems},
volume = {148}, 1364 1364 volume = {148},
pages = {189-201}, 1365 1365 pages = {189-201},
year = {2018}, 1366 1366 year = {2018},
issn = {0950-7051}, 1367 1367 issn = {0950-7051},
doi = {https://doi.org/10.1016/j.knosys.2018.03.001}, 1368 1368 doi = {https://doi.org/10.1016/j.knosys.2018.03.001},
url = {https://www.sciencedirect.com/science/article/pii/S0950705118301199}, 1369 1369 url = {https://www.sciencedirect.com/science/article/pii/S0950705118301199},
author = {Kai Zhang and Yiyu Yao}, 1370 1370 author = {Kai Zhang and Yiyu Yao},
keywords = {Bayesian knowledge tracing, Three-way decisions}, 1371 1371 keywords = {Bayesian knowledge tracing, Three-way decisions},
abstract = {This paper proposes a Bayesian knowledge tracing model with three learning states by extending the original two learning states. We divide a learning process into three sections by using an evaluation function for three-way decisions. Advantages of such a trisection over traditional bisection are demonstrated by comparative experiments. We develop a three learning states model based on the trisection of the learning process. We apply the model to a series of comparative experiments with the original model. Qualitative and quantitative analyses of the experimental results indicate the superior performance of the proposed model over the original model in terms of prediction accuracies and related statistical measures.} 1372 1372 abstract = {This paper proposes a Bayesian knowledge tracing model with three learning states by extending the original two learning states. We divide a learning process into three sections by using an evaluation function for three-way decisions. Advantages of such a trisection over traditional bisection are demonstrated by comparative experiments. We develop a three learning states model based on the trisection of the learning process. We apply the model to a series of comparative experiments with the original model. Qualitative and quantitative analyses of the experimental results indicate the superior performance of the proposed model over the original model in terms of prediction accuracies and related statistical measures.}
} 1373 1373 }
1374 1374
@article{Li_2024, 1375 1375 @article{Li_2024,
doi = {10.3847/1538-4357/ad3215}, 1376 1376 doi = {10.3847/1538-4357/ad3215},
url = {https://dx.doi.org/10.3847/1538-4357/ad3215}, 1377 1377 url = {https://dx.doi.org/10.3847/1538-4357/ad3215},
year = {2024}, 1378 1378 year = {2024},
month = {apr}, 1379 1379 month = {apr},
publisher = {The American Astronomical Society}, 1380 1380 publisher = {The American Astronomical Society},
volume = {965}, 1381 1381 volume = {965},
number = {2}, 1382 1382 number = {2},
pages = {125}, 1383 1383 pages = {125},
author = {Zhigang Li and Zhejie Ding and Yu Yu and Pengjie Zhang}, 1384 1384 author = {Zhigang Li and Zhejie Ding and Yu Yu and Pengjie Zhang},
title = {The Kullback–Leibler Divergence and the Convergence Rate of Fast Covariance Matrix Estimators in Galaxy Clustering Analysis}, 1385 1385 title = {The Kullback–Leibler Divergence and the Convergence Rate of Fast Covariance Matrix Estimators in Galaxy Clustering Analysis},
journal = {The Astrophysical Journal}, 1386 1386 journal = {The Astrophysical Journal},
abstract = {We present a method to quantify the convergence rate of the fast estimators of the covariance matrices in the large-scale structure analysis. Our method is based on the Kullback–Leibler (KL) divergence, which describes the relative entropy of two probability distributions. As a case study, we analyze the delete-d jackknife estimator for the covariance matrix of the galaxy correlation function. We introduce the information factor or the normalized KL divergence with the help of a set of baseline covariance matrices to diagnose the information contained in the jackknife covariance matrix. Using a set of quick particle mesh mock catalogs designed for the Baryon Oscillation Spectroscopic Survey DR11 CMASS galaxy survey, we find that the jackknife resampling method succeeds in recovering the covariance matrix with 10 times fewer simulation mocks than that of the baseline method at small scales (s ≤ 40 h −1 Mpc). However, the ability to reduce the number of mock catalogs is degraded at larger scales due to the increasing bias on the jackknife covariance matrix. Note that the analysis in this paper can be applied to any fast estimator of the covariance matrix for galaxy clustering measurements.} 1387 1387 abstract = {We present a method to quantify the convergence rate of the fast estimators of the covariance matrices in the large-scale structure analysis. Our method is based on the Kullback–Leibler (KL) divergence, which describes the relative entropy of two probability distributions. As a case study, we analyze the delete-d jackknife estimator for the covariance matrix of the galaxy correlation function. We introduce the information factor or the normalized KL divergence with the help of a set of baseline covariance matrices to diagnose the information contained in the jackknife covariance matrix. 
Using a set of quick particle mesh mock catalogs designed for the Baryon Oscillation Spectroscopic Survey DR11 CMASS galaxy survey, we find that the jackknife resampling method succeeds in recovering the covariance matrix with 10 times fewer simulation mocks than that of the baseline method at small scales (s ≤ 40 h −1 Mpc). However, the ability to reduce the number of mock catalogs is degraded at larger scales due to the increasing bias on the jackknife covariance matrix. Note that the analysis in this paper can be applied to any fast estimator of the covariance matrix for galaxy clustering measurements.}
} 1388 1388 }
1389 1389
@Article{Kim2024, 1390 1390 @Article{Kim2024,
author={Kim, Wonjik}, 1391 1391 author={Kim, Wonjik},
title={A Random Focusing Method with Jensen--Shannon Divergence for Improving Deep Neural Network Performance Ensuring Architecture Consistency}, 1392 1392 title={A Random Focusing Method with Jensen--Shannon Divergence for Improving Deep Neural Network Performance Ensuring Architecture Consistency},
journal={Neural Processing Letters}, 1393 1393 journal={Neural Processing Letters},
year={2024}, 1394 1394 year={2024},
month={Jun}, 1395 1395 month={Jun},
day={17}, 1396 1396 day={17},
volume={56}, 1397 1397 volume={56},
number={4}, 1398 1398 number={4},
pages={199}, 1399 1399 pages={199},
abstract={Multiple hidden layers in deep neural networks perform non-linear transformations, enabling the extraction of meaningful features and the identification of relationships between input and output data. However, the gap between the training and real-world data can result in network overfitting, prompting the exploration of various preventive methods. The regularization technique called 'dropout' is widely used for deep learning models to improve the training of robust and generalized features. During the training phase with dropout, neurons in a particular layer are randomly selected to be ignored for each input. This random exclusion of neurons encourages the network to depend on different subsets of neurons at different times, fostering robustness and reducing sensitivity to specific neurons. This study introduces a novel approach called random focusing, departing from complete neuron exclusion in dropout. The proposed random focusing selectively highlights random neurons during training, aiming for a smoother transition between training and inference phases while keeping network architecture consistent. This study also incorporates Jensen--Shannon Divergence to enhance the stability and efficacy of the random focusing method. Experimental validation across tasks like image classification and semantic segmentation demonstrates the adaptability of the proposed methods across different network architectures, including convolutional neural networks and transformers.}, 1400 1400 abstract={Multiple hidden layers in deep neural networks perform non-linear transformations, enabling the extraction of meaningful features and the identification of relationships between input and output data. However, the gap between the training and real-world data can result in network overfitting, prompting the exploration of various preventive methods. 
The regularization technique called 'dropout' is widely used for deep learning models to improve the training of robust and generalized features. During the training phase with dropout, neurons in a particular layer are randomly selected to be ignored for each input. This random exclusion of neurons encourages the network to depend on different subsets of neurons at different times, fostering robustness and reducing sensitivity to specific neurons. This study introduces a novel approach called random focusing, departing from complete neuron exclusion in dropout. The proposed random focusing selectively highlights random neurons during training, aiming for a smoother transition between training and inference phases while keeping network architecture consistent. This study also incorporates Jensen--Shannon Divergence to enhance the stability and efficacy of the random focusing method. Experimental validation across tasks like image classification and semantic segmentation demonstrates the adaptability of the proposed methods across different network architectures, including convolutional neural networks and transformers.},
issn={1573-773X}, 1401 1401 issn={1573-773X},
doi={10.1007/s11063-024-11668-z}, 1402 1402 doi={10.1007/s11063-024-11668-z},
url={https://doi.org/10.1007/s11063-024-11668-z} 1403 1403 url={https://doi.org/10.1007/s11063-024-11668-z}
} 1404 1404 }
1405 1405
@InProceedings{pmlr-v238-ou24a, 1406 1406 @InProceedings{pmlr-v238-ou24a,
title = {Thompson Sampling Itself is Differentially Private}, 1407 1407 title = {Thompson Sampling Itself is Differentially Private},
author = {Ou, Tingting and Cummings, Rachel and Avella Medina, Marco}, 1408 1408 author = {Ou, Tingting and Cummings, Rachel and Avella Medina, Marco},
booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics}, 1409 1409 booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
pages = {1576--1584}, 1410 1410 pages = {1576--1584},
year = {2024}, 1411 1411 year = {2024},
editor = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen}, 1412 1412 editor = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
volume = {238}, 1413 1413 volume = {238},
series = {Proceedings of Machine Learning Research}, 1414 1414 series = {Proceedings of Machine Learning Research},
month = {02--04 May}, 1415 1415 month = {02--04 May},
publisher = {PMLR}, 1416 1416 publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v238/ou24a/ou24a.pdf}, 1417 1417 pdf = {https://proceedings.mlr.press/v238/ou24a/ou24a.pdf},
url = {https://proceedings.mlr.press/v238/ou24a.html}, 1418 1418 url = {https://proceedings.mlr.press/v238/ou24a.html},
abstract = {In this work we first show that the classical Thompson sampling algorithm for multi-arm bandits is differentially private as-is, without any modification. We provide per-round privacy guarantees as a function of problem parameters and show composition over $T$ rounds; since the algorithm is unchanged, existing $O(\sqrt{NT\log N})$ regret bounds still hold and there is no loss in performance due to privacy. We then show that simple modifications – such as pre-pulling all arms a fixed number of times, increasing the sampling variance – can provide tighter privacy guarantees. We again provide privacy guarantees that now depend on the new parameters introduced in the modification, which allows the analyst to tune the privacy guarantee as desired. We also provide a novel regret analysis for this new algorithm, and show how the new parameters also impact expected regret. Finally, we empirically validate and illustrate our theoretical findings in two parameter regimes and demonstrate that tuning the new parameters substantially improve the privacy-regret tradeoff.} 1419 1419 abstract = {In this work we first show that the classical Thompson sampling algorithm for multi-arm bandits is differentially private as-is, without any modification. We provide per-round privacy guarantees as a function of problem parameters and show composition over $T$ rounds; since the algorithm is unchanged, existing $O(\sqrt{NT\log N})$ regret bounds still hold and there is no loss in performance due to privacy. We then show that simple modifications – such as pre-pulling all arms a fixed number of times, increasing the sampling variance – can provide tighter privacy guarantees. We again provide privacy guarantees that now depend on the new parameters introduced in the modification, which allows the analyst to tune the privacy guarantee as desired. We also provide a novel regret analysis for this new algorithm, and show how the new parameters also impact expected regret. 
Finally, we empirically validate and illustrate our theoretical findings in two parameter regimes and demonstrate that tuning the new parameters substantially improves the privacy-regret tradeoff.}
} 1420 1420 }
1421 1421
@Article{math12111758, 1422 1422 @Article{math12111758,
AUTHOR = {Uguina, Antonio R. and Gomez, Juan F. and Panadero, Javier and Martínez-Gavara, Anna and Juan, Angel A.}, 1423 1423 AUTHOR = {Uguina, Antonio R. and Gomez, Juan F. and Panadero, Javier and Martínez-Gavara, Anna and Juan, Angel A.},
TITLE = {A Learnheuristic Algorithm Based on Thompson Sampling for the Heterogeneous and Dynamic Team Orienteering Problem}, 1424 1424 TITLE = {A Learnheuristic Algorithm Based on Thompson Sampling for the Heterogeneous and Dynamic Team Orienteering Problem},
JOURNAL = {Mathematics}, 1425 1425 JOURNAL = {Mathematics},
VOLUME = {12}, 1426 1426 VOLUME = {12},
YEAR = {2024}, 1427 1427 YEAR = {2024},
NUMBER = {11}, 1428 1428 NUMBER = {11},
ARTICLE-NUMBER = {1758}, 1429 1429 ARTICLE-NUMBER = {1758},
URL = {https://www.mdpi.com/2227-7390/12/11/1758}, 1430 1430 URL = {https://www.mdpi.com/2227-7390/12/11/1758},
ISSN = {2227-7390}, 1431 1431 ISSN = {2227-7390},
ABSTRACT = {The team orienteering problem (TOP) is a well-studied optimization challenge in the field of Operations Research, where multiple vehicles aim to maximize the total collected rewards within a given time limit by visiting a subset of nodes in a network. With the goal of including dynamic and uncertain conditions inherent in real-world transportation scenarios, we introduce a novel dynamic variant of the TOP that considers real-time changes in environmental conditions affecting reward acquisition at each node. Specifically, we model the dynamic nature of environmental factors—such as traffic congestion, weather conditions, and battery level of each vehicle—to reflect their impact on the probability of obtaining the reward when visiting each type of node in a heterogeneous network. To address this problem, a learnheuristic optimization framework is proposed. It combines a metaheuristic algorithm with Thompson sampling to make informed decisions in dynamic environments. Furthermore, we conduct empirical experiments to assess the impact of varying reward probabilities on resource allocation and route planning within the context of this dynamic TOP, where nodes might offer a different reward behavior depending upon the environmental conditions. Our numerical results indicate that the proposed learnheuristic algorithm outperforms static approaches, achieving up to 25% better performance in highly dynamic scenarios. Our findings highlight the effectiveness of our approach in adapting to dynamic conditions and optimizing decision-making processes in transportation systems.}, 1432 1432 ABSTRACT = {The team orienteering problem (TOP) is a well-studied optimization challenge in the field of Operations Research, where multiple vehicles aim to maximize the total collected rewards within a given time limit by visiting a subset of nodes in a network. 
With the goal of including dynamic and uncertain conditions inherent in real-world transportation scenarios, we introduce a novel dynamic variant of the TOP that considers real-time changes in environmental conditions affecting reward acquisition at each node. Specifically, we model the dynamic nature of environmental factors—such as traffic congestion, weather conditions, and battery level of each vehicle—to reflect their impact on the probability of obtaining the reward when visiting each type of node in a heterogeneous network. To address this problem, a learnheuristic optimization framework is proposed. It combines a metaheuristic algorithm with Thompson sampling to make informed decisions in dynamic environments. Furthermore, we conduct empirical experiments to assess the impact of varying reward probabilities on resource allocation and route planning within the context of this dynamic TOP, where nodes might offer a different reward behavior depending upon the environmental conditions. Our numerical results indicate that the proposed learnheuristic algorithm outperforms static approaches, achieving up to 25% better performance in highly dynamic scenarios. Our findings highlight the effectiveness of our approach in adapting to dynamic conditions and optimizing decision-making processes in transportation systems.},
DOI = {10.3390/math12111758} 1433 1433 DOI = {10.3390/math12111758}
} 1434 1434 }
1435 1435
@inproceedings{NEURIPS2023_9d8cf124, 1436 1436 @inproceedings{NEURIPS2023_9d8cf124,
author = {Abel, David and Barreto, Andre and Van Roy, Benjamin and Precup, Doina and van Hasselt, Hado P and Singh, Satinder}, 1437 1437 author = {Abel, David and Barreto, Andre and Van Roy, Benjamin and Precup, Doina and van Hasselt, Hado P and Singh, Satinder},
booktitle = {Advances in Neural Information Processing Systems}, 1438 1438 booktitle = {Advances in Neural Information Processing Systems},
editor = {A. Oh and T. Naumann and A. Globerson and K. Saenko and M. Hardt and S. Levine}, 1439 1439 editor = {A. Oh and T. Naumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
pages = {50377--50407}, 1440 1440 pages = {50377--50407},
publisher = {Curran Associates, Inc.}, 1441 1441 publisher = {Curran Associates, Inc.},
title = {A Definition of Continual Reinforcement Learning}, 1442 1442 title = {A Definition of Continual Reinforcement Learning},
url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/9d8cf1247786d6dfeefeeb53b8b5f6d7-Paper-Conference.pdf}, 1443 1443 url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/9d8cf1247786d6dfeefeeb53b8b5f6d7-Paper-Conference.pdf},
volume = {36}, 1444 1444 volume = {36},
year = {2023} 1445 1445 year = {2023}
} 1446 1446 }
1447 1447
@article{NGUYEN2024111566, 1448 1448 @article{NGUYEN2024111566,
title = {Dynamic metaheuristic selection via Thompson Sampling for online optimization}, 1449 1449 title = {Dynamic metaheuristic selection via Thompson Sampling for online optimization},
journal = {Applied Soft Computing}, 1450 1450 journal = {Applied Soft Computing},
volume = {158}, 1451 1451 volume = {158},
pages = {111566}, 1452 1452 pages = {111566},
year = {2024}, 1453 1453 year = {2024},
issn = {1568-4946}, 1454 1454 issn = {1568-4946},
doi = {https://doi.org/10.1016/j.asoc.2024.111566}, 1455 1455 doi = {https://doi.org/10.1016/j.asoc.2024.111566},
url = {https://www.sciencedirect.com/science/article/pii/S1568494624003405}, 1456 1456 url = {https://www.sciencedirect.com/science/article/pii/S1568494624003405},
author = {Alain Nguyen}, 1457 1457 author = {Alain Nguyen},
keywords = {Selection hyper-heuristic, Multi-armed-bandit, Thompson Sampling, Online optimization}, 1458 1458 keywords = {Selection hyper-heuristic, Multi-armed-bandit, Thompson Sampling, Online optimization},
abstract = {It is acknowledged that no single heuristic can outperform all the others in every optimization problem. This has given rise to hyper-heuristic methods for providing solutions to a wider range of problems. In this work, a set of five non-competing low-level heuristics is proposed in a hyper-heuristic framework. The multi-armed bandit problem analogy is efficiently leveraged and Thompson Sampling is used to actively select the best heuristic for online optimization. The proposed method is compared against ten population-based metaheuristic algorithms on the well-known CEC'05 optimization benchmark consisting of 23 functions of various landscapes. The results show that the proposed algorithm is the only one able to find the global minimum of all functions with remarkable consistency.}
} 1460 1460 }
1461 1461
@Article{Malladi2024,
author={Malladi, Rama K.},
title={Application of Supervised Machine Learning Techniques to Forecast the COVID-19 U.S. Recession and Stock Market Crash},
journal={Computational Economics},
year={2024},
month={Mar},
day={01},
volume={63},
number={3},
pages={1021-1045},
abstract={Machine learning (ML), a transformational technology, has been successfully applied to forecasting events down the road. This paper demonstrates that supervised ML techniques can be used in recession and stock market crash (more than 20{\%} drawdown) forecasting. After learning from strictly past monthly data, ML algorithms detected the Covid-19 recession by December 2019, six months before the official NBER announcement. Moreover, ML algorithms foresaw the March 2020 S{\&}P500 crash two months before it happened. The current labor market and housing are harbingers of a future U.S. recession (in 3 months). Financial factors have a bigger role to play in stock market crashes than economic factors. The labor market appears as a top-two feature in predicting both recessions and crashes. ML algorithms detect that the U.S. exited recession before December 2020, even though the official NBER announcement has not yet been made. They also do not anticipate a U.S. stock market crash before March 2021. ML methods have three times higher false discovery rates of recessions compared to crashes.},
issn={1572-9974},
doi={10.1007/s10614-022-10333-8},
url={https://doi.org/10.1007/s10614-022-10333-8}
}

@INPROCEEDINGS{10493943,
author={R. Subha and N. Gayathri and S. Sasireka and R. Sathiyabanu and B. Santhiyaa and B. Varshini},
booktitle={2024 5th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI)},
title={Intelligent Tutoring Systems using Long Short-Term Memory Networks and Bayesian Knowledge Tracing},
year={2024},
pages={24-29},
keywords={Knowledge engineering;Filtering;Estimation;Transforms;Real-time systems;Bayes methods;Problem-solving;Intelligent Tutoring System (ITS);Long Short-Term Memory (LSTM);Bayesian Knowledge Tracing (BKT);Reinforcement Learning},
doi={10.1109/ICMCSI61536.2024.00010}
}

@article{https://doi.org/10.1155/2024/4067721,
author = {Ahmed, Esmael},
title = {Student Performance Prediction Using Machine Learning Algorithms},
journal = {Applied Computational Intelligence and Soft Computing},
volume = {2024},
number = {1},
pages = {4067721},
doi = {10.1155/2024/4067721},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1155/2024/4067721},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1155/2024/4067721},
abstract = {Education is crucial for a productive life and providing necessary resources. With the advent of technology like artificial intelligence, higher education institutions are incorporating technology into traditional teaching methods. Predicting academic success has gained interest in education as a strong academic record improves a university’s ranking and increases student employment opportunities. Modern learning institutions face challenges in analyzing performance, providing high-quality education, formulating strategies for evaluating students’ performance, and identifying future needs. E-learning is a rapidly growing and advanced form of education, where students enroll in online courses. Platforms like Intelligent Tutoring Systems (ITS), learning management systems (LMS), and massive open online courses (MOOC) use educational data mining (EDM) to develop automatic grading systems, recommenders, and adaptative systems. However, e-learning is still considered a challenging learning environment due to the lack of direct interaction between students and course instructors. Machine learning (ML) is used in developing adaptive intelligent systems that can perform complex tasks beyond human abilities. Some areas of applications of ML algorithms include cluster analysis, pattern recognition, image processing, natural language processing, and medical diagnostics. In this research work, K-means, a clustering data mining technique using Davies’ Bouldin method, obtains clusters to find important features affecting students’ performance. The study found that the SVM algorithm had the best prediction results after parameter adjustment, with a 96\% accuracy rate. In this paper, the researchers have examined the functions of the Support Vector Machine, Decision Tree, naive Bayes, and KNN classifiers. The outcomes of parameter adjustment greatly increased the accuracy of the four prediction models. Naïve Bayes model’s prediction accuracy is the lowest when compared to other prediction methods, as it assumes a strong independent relationship between features.},
year = {2024}
}

@article{HAZEM,
author = {Hazem A. Alrakhawi and Nurullizam Jamiat and Samy S. Abu-Naser},
title = {Intelligent Tutoring Systems in education: A systematic review of usage, tools, effects and evaluation},
journal = {Journal of Theoretical and Applied Information Technology},
volume = {2023},
number = {4},
year = {2023}
}

@Article{Liu2023,
author={Liu, Mengchi and Yu, Dongmei},
title={Towards intelligent E-learning systems},
journal={Education and Information Technologies},
year={2023},
month={Jul},
day={01},
volume={28},
number={7},
pages={7845-7876},
abstract={The prevalence of e-learning systems has made educational resources more accessible, interactive and effective to learners without the geographic and temporal boundaries. However, as the number of users increases and the volume of data grows, current e-learning systems face some technical and pedagogical challenges. This paper provides a comprehensive review on the efforts of applying new information and communication technologies to improve e-learning services. We first systematically investigate current e-learning systems in terms of their classification, architecture, functions, challenges, and current trends. We then present a general architecture for big data based e-learning systems to meet the ever-growing demand for e-learning. We also describe how to use data generated in big data based e-learning systems to support more flexible and customized course delivery and personalized learning.},
issn={1573-7608},
doi={10.1007/s10639-022-11479-6},
url={https://doi.org/10.1007/s10639-022-11479-6}
}

@InProceedings{10.1007/978-3-031-63646-2_13,
author="Soto-Forero, Daniel and Ackermann, Simha and Betbeder, Marie-Laure and Henriet, Julien",
editor="Recio-Garcia, Juan A. and Orozco-del-Castillo, Mauricio G. and Bridge, Derek",
title="The Intelligent Tutoring System AI-VT with Case-Based Reasoning and Real Time Recommender Models",
booktitle="Case-Based Reasoning Research and Development",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="191--205",
abstract="This paper presents a recommendation model coupled on an existing CBR system model through a new modular architecture designed to integrate multiple services in a learning system called AI-VT (Artificial Intelligence Training System). The recommendation model provides a semi-automatic review of the CBR, two variants of the recommendation model have been implemented: deterministic and stochastic. The model has been tested with 1000 simulated learners, and compared with an original CBR system and BKT (Bayesian Knowledge Tracing) recommender system. The results show that the proposed model identifies learners' weaknesses correctly and revises the content of the ITS (Intelligent Tutoring System) better than the original ITS with CBR. Compared to BKT, the results at each level of complexity are variable, but overall the proposed stochastic model obtains better results.",
isbn="978-3-031-63646-2"
}

@article{doi:10.1137/23M1592420,
author = {Minsker, Stanislav and Strawn, Nate},
title = {The Geometric Median and Applications to Robust Mean Estimation},
journal = {SIAM Journal on Mathematics of Data Science},
volume = {6},
number = {2},
pages = {504-533},
year = {2024},
doi = {10.1137/23M1592420},
url = {https://doi.org/10.1137/23M1592420},
eprint = {https://doi.org/10.1137/23M1592420},
abstract = {This paper is devoted to the statistical and numerical properties of the geometric median and its applications to the problem of robust mean estimation via the median of means principle. Our main theoretical results include (a) an upper bound for the distance between the mean and the median for general absolutely continuous distributions in \(\mathbb R^d\), and examples of specific classes of distributions for which these bounds do not depend on the ambient dimension \(d\); (b) exponential deviation inequalities for the distance between the sample and the population versions of the geometric median, which again depend only on the trace-type quantities and not on the ambient dimension. As a corollary, we deduce improved bounds for the (geometric) median of means estimator that hold for large classes of heavy-tailed distributions. Finally, we address the error of numerical approximation, which is an important practical aspect of any statistical estimation procedure. We demonstrate that the objective function minimized by the geometric median satisfies a “local quadratic growth” condition that allows one to translate suboptimality bounds for the objective function to the corresponding bounds for the numerical approximation to the median itself and propose a simple stopping rule applicable to any optimization method which yields explicit error guarantees. We conclude with the numerical experiments, including the application to estimation of mean values of log-returns for S\&P 500 data.}
}

@article{lei2024analysis,
title={Analysis of Simpson’s Paradox and Its Applications},
author={Lei, Zhihao},
journal={Highlights in Science, Engineering and Technology},
volume={88},
pages={357--362},
year={2024}
}

@InProceedings{pmlr-v108-seznec20a,
title = {A single algorithm for both restless and rested rotting bandits},
author = {Seznec, Julien and Menard, Pierre and Lazaric, Alessandro and Valko, Michal},
booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
pages = {3784--3794},
year = {2020},
editor = {Chiappa, Silvia and Calandra, Roberto},
volume = {108},
series = {Proceedings of Machine Learning Research},
month = {26--28 Aug},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v108/seznec20a/seznec20a.pdf},
url = {https://proceedings.mlr.press/v108/seznec20a.html},
abstract = {In many application domains (e.g., recommender systems, intelligent tutoring systems), the rewards associated to the available actions tend to decrease over time. This decay is either caused by the actions executed in the past (e.g., a user may get bored when songs of the same genre are recommended over and over) or by an external factor (e.g., content becomes outdated). These two situations can be modeled as specific instances of the rested and restless bandit settings, where arms are rotting (i.e., their value decrease over time). These problems were thought to be significantly different, since Levine et al. (2017) showed that state-of-the-art algorithms for restless bandit perform poorly in the rested rotting setting. In this paper, we introduce a novel algorithm, Rotting Adaptive Window UCB (RAW-UCB), that achieves near-optimal regret in both rotting rested and restless bandit, without any prior knowledge of the setting (rested or restless) and the type of non-stationarity (e.g., piece-wise constant, bounded variation). This is in striking contrast with previous negative results showing that no algorithm can achieve similar results as soon as rewards are allowed to increase. We confirm our theoretical findings on a number of synthetic and dataset-based experiments.}
}

@article{doi:10.3233/AIC-1994-7104,
author = {Agnar Aamodt and Enric Plaza},
title = {Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches},
journal = {AI Communications},
volume = {7},
number = {1},
pages = {39-59},
year = {1994},
doi = {10.3233/AIC-1994-7104},
url = {https://journals.sagepub.com/doi/abs/10.3233/AIC-1994-7104},
eprint = {https://journals.sagepub.com/doi/pdf/10.3233/AIC-1994-7104},
abstract = {Case-based reasoning is a recent approach to problem solving and learning that has got a lot of attention over the last few years. Originating in the US, the basic idea and underlying theories have spread to other continents, and we are now within a period of highly active research in case-based reasoning in Europe as well. This paper gives an overview of the foundational issues related to case-based reasoning, describes some of the leading methodological approaches within the field, and exemplifies the current state through pointers to some systems. Initially, a general framework is defined, to which the subsequent descriptions and discussions will refer. The framework is influenced by recent methodologies for knowledge level descriptions of intelligent systems. The methods for case retrieval, reuse, solution testing, and learning are summarized, and their actual realization is discussed in the light of a few example systems that represent different CBR approaches. We also discuss the role of case-based methods as one type of reasoning and learning method within an integrated system architecture.}
}

@Book{schank+abelson77,
author = {Roger C. Schank and Robert P. Abelson},
title = {Scripts, Plans, Goals and Understanding: an Inquiry into Human Knowledge Structures},
publisher = {L. Erlbaum},
year = {1977},
address = {Hillsdale, NJ},
keywords = {PAM, SAM, TALE-SPIN, causality, conceptual dependency, goals, plans, scripts, semantic primitive, text understanding}
}

@article{KOLODNER1983281,
title = {Reconstructive memory: A computer model},
journal = {Cognitive Science},
volume = {7},
number = {4},
pages = {281-328},
year = {1983},
issn = {0364-0213},
doi = {10.1016/S0364-0213(83)80002-0},
url = {https://www.sciencedirect.com/science/article/pii/S0364021383800020},
author = {Janet L. Kolodner},
abstract = {This study presents a process model of very long-term episodic memory. The process presented is a reconstructive process. The process involves application of three kinds of reconstructive strategies—component-to-context instantiation strategies, component-instantiation strategies, and context-to-context instantiation strategies. The first is used to direct search to appropriate conceptual categories in memory. The other two are used to direct search within the chosen conceptual category. A fourth type of strategy, called executive search strategies, guide search for concepts related to the one targeted for retrieval. A conceptual memory organization implied by human reconstructive memory is presented along with examples which motivate it. A basic retrieval algorithm is presented for traversing that structure. Retrieval strategies arise from failures in that algorithm. The memory organization and retrieval processes are implemented in a computer program called CYRUS which stores events in the lives of former Secretaries of State Cyrus Vance and Edmund Muskie and answers questions posed in English concerning that information. Examples which motivate the process model are drawn from protocols of human memory search. Examples of CYRUS's behavior demonstrate the implemented process model. Conclusions are drawn concerning retrieval failures and the relationship of episodic and semantic memory.}
}

@Book{Riesbeck1989,
author = {Riesbeck C.K. and Schank R.C.},
year = {1989},
title = {Inside Case-Based Reasoning},
publisher = {Psychology Press},
url = {https://doi.org/10.4324/9780203781821}
}

@article{ALABDULRAHMAN2021114061,
title = {Catering for unique tastes: Targeting grey-sheep users recommender systems through one-class machine learning},
journal = {Expert Systems with Applications},
volume = {166},
pages = {114061},
year = {2021},
issn = {0957-4174},
doi = {https://doi.org/10.1016/j.eswa.2020.114061},
url = {https://www.sciencedirect.com/science/article/pii/S0957417420308241},
author = {Rabaa Alabdulrahman and Herna Viktor},
keywords = {Recommender systems, Model-based systems, Machine learning, Grey-sheep, One-class classification},
abstract = {In recommendation systems, the grey-sheep problem refers to users with unique preferences and tastes that make it difficult to develop accurate profiles. That is, the similarity search approach typically followed during the recommendation process fails to yield good results. Most research does not focus on such users and thus fails to cater to more exotic tastes and emerging trends, leading to a subsequent loss in revenue and marketing opportunities. One suggested solution is to use one-class classification to generate a prediction list for these users, where decision boundaries are learned that distinguish between normal and grey-sheep users. In this paper, we present the grey-sheep one-class recommendation (GSOR) framework designed to create accurate prediction models while taking both regular and grey-sheep users into account. In addition, we introduce a novel grey-sheep movie recommendation benchmark to be used by current and future researchers. When evaluating our GSOR framework against this benchmark, our results indicate the value of combining cluster analysis, outlier detection, and one-class learning to generate relevant and timely recommendation lists from data sets that contain grey-sheep users. Specifically, by employing one-class decision tree algorithms, our GSOR framework was able to outperform traditional collaborative filtering-based recommendation systems in both accuracy and model construction time. Furthermore, we report that having grey-sheep users in the system often had a positive impact on the learning and recommendation processes.}
}

@article{HU2025127130,
title = {A social importance and category enhanced cold-start user recommendation system},
journal = {Expert Systems with Applications},
volume = {277},
pages = {127130},
year = {2025},
issn = {0957-4174},
doi = {https://doi.org/10.1016/j.eswa.2025.127130},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425007523},
author = {Bin Hu and Yinghong Ma and Zhiyuan Liu and Hong Wang},
keywords = {Social recommendation, Graph neural network, Cold-start users, Social importance, Category information},
abstract = {Social recommendation, which utilizes social relations to enhance recommender systems, has gained increasing attention with the rapid development of online social platforms. Although numerous studies have underscored the efficacy of integrating personal social information to bolster the performance of such systems, social recommendations still face several problems. Firstly, the cold-start problem for items persists in recommendation tasks leveraging social information. Secondly, the importance of users within social networks is often disregarded, leading to biases in recommendation tasks utilizing social information. Thirdly, the lack of utilization of item category information makes learning representations of items and users insufficient. Hence, this paper proposes a novel social recommendation model, Social Importance and Category Enhanced Cold-Start User Recommendation System (SICERec). At first, potential preference information for cold-start users is incorporated into similar user modules, extracting user preference information from historical interaction data between users and items. After that, the significance of users within social networks is considered by integrating their centrality attributes, thereby enriching the semantic representation of users. Finally, category information of user historical interaction items is incorporated into the modeling process to enrich the semantics of items. Extensive experimental results demonstrate the significant advantages of our SICERec method. Our model exhibits a minimum improvement of 15.1% in RMSE and at least 26.2% in MAE compared to state-of-the-art models when evaluated on two real datasets. Additionally, ablation experiments are conducted to validate each module’s effectiveness and provide further insights into how users’ social attributes and preferences influence their choices. We release our code at https://github.com/BinHu129/SICERec.}
}

@inproceedings{wolf2024keep,
title={Keep the faith: Faithful explanations in convolutional neural networks for case-based reasoning},
author={Wolf, Tom Nuno and Bongratz, Fabian and Rickmann, Anne-Marie and P{\"o}lsterl, Sebastian and Wachinger, Christian},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={38},
pages={5921--5929},
year={2024}
}

@article{PAREJASLLANOVARCED2024111469,
title = {Case-based selection of explanation methods for neural network image classifiers},
journal = {Knowledge-Based Systems},
volume = {288},
pages = {111469},
year = {2024},
issn = {0950-7051},
doi = {https://doi.org/10.1016/j.knosys.2024.111469},
url = {https://www.sciencedirect.com/science/article/pii/S0950705124001047},
author = {Humberto Parejas-Llanovarced and Marta Caro-Martínez and Mauricio G. Orozco-del-Castillo and Juan A. Recio-García},
keywords = {Case-based reasoning, Explainable artificial intelligence, Explanation methods, Explanation of image classification},
abstract = {Deep learning is especially remarkable in terms of image classification. However, the outcomes of models are not explainable to users due to their complex nature, having an impact on the users’ trust in the provided classifications. To solve this problem, several explanation techniques have been proposed, but they greatly depend on the nature of the images being classified and the users’ perception of the explanations. In this work, we present Case-Based Reasoning as a learning-based solution to the problem of selecting the best explanation method for the image classifications obtained by models. We propose the elicitation of a case base that reflects the human perception of the quality of the explanations and how to reuse this knowledge to select the best explanation approach for a given image classification.}
}

@Article{buildings13030651,
AUTHOR = {Uysal, Furkan and Sonmez, Rifat},
TITLE = {Bootstrap Aggregated Case-Based Reasoning Method for Conceptual Cost Estimation},
JOURNAL = {Buildings},
VOLUME = {13},
YEAR = {2023},
NUMBER = {3},
ARTICLE-NUMBER = {651},
URL = {https://www.mdpi.com/2075-5309/13/3/651},
ISSN = {2075-5309},
ABSTRACT = {Conceptual cost estimation is an important step in project feasibility decisions when there is not enough information on detailed design and project requirements. Methods that enable quick and reasonably accurate conceptual cost estimates are crucial for achieving successful decisions in the early stages of construction projects. For this reason, numerous machine learning methods proposed in the literature that use different learning mechanisms. In recent years, the case-based reasoning (CBR) method has received particular attention in the literature for conceptual cost estimation of construction projects that use similarity-based learning principles. Despite the fact that CBR provides a powerful and practical alternative for conceptual cost estimation, one of the main criticisms about CBR is its low prediction performance when there is not a sufficient number of cases. This paper presents a bootstrap aggregated CBR method for achieving advancement in CBR research, particularly for conceptual cost estimation of construction projects when a limited number of training cases are available. The proposed learning method is designed so that CBR can learn from a diverse set of training data even when there are not a sufficient number of cases. The performance of the proposed bootstrap aggregated CBR method is evaluated using three data sets. The results revealed that the prediction performance of the new bootstrap aggregated CBR method is better than the prediction performance of the existing CBR method. Since the majority of conceptual cost estimates are made with a limited number of cases, the proposed method provides a contribution to CBR research and practice by improving the existing methods for conceptual cost estimating.},
DOI = {10.3390/buildings13030651}
}

@article{YU2023110163,
title = {A case-based reasoning driven ensemble learning paradigm for financial distress prediction with missing data},
journal = {Applied Soft Computing},
volume = {137},
pages = {110163},
This is BibTeX, Version 0.99d (TeX Live 2023)
Capacity: max_strings=200000, hash_size=200000, hash_prime=170003
The top-level auxiliary file: main.aux
A level-1 auxiliary file: ./chapters/contexte2.aux
A level-1 auxiliary file: ./chapters/EIAH.aux
A level-1 auxiliary file: ./chapters/CBR.aux
A level-1 auxiliary file: ./chapters/Architecture.aux
A level-1 auxiliary file: ./chapters/ESCBR.aux
A level-1 auxiliary file: ./chapters/TS.aux
A level-1 auxiliary file: ./chapters/Conclusions.aux
A level-1 auxiliary file: ./chapters/Publications.aux
The style file: apalike.bst
Database file #1: main.bib
Warning--entry type for "Daubias2011" isn't style-file defined
--line 693 of file main.bib
Warning--to sort, need author or key in UCI
Warning--to sort, need author or key in Data
You've used 87 entries,
1935 wiz_defined-function locations,
1016 strings with 21593 characters,
and the built_in function-call counts, 39102 in all, are:
= -- 3739
> -- 1860
< -- 56
+ -- 680
- -- 626
* -- 3362
:= -- 6698
add.period$ -- 280
call.type$ -- 87
change.case$ -- 722
chr.to.int$ -- 85
cite$ -- 91
duplicate$ -- 1470
empty$ -- 2633
format.name$ -- 757
if$ -- 7773
int.to.chr$ -- 3
int.to.str$ -- 0
missing$ -- 91
newline$ -- 440
num.names$ -- 285
pop$ -- 655
preamble$ -- 1
purify$ -- 728
quote$ -- 0
skip$ -- 1120
stack$ -- 0
substring$ -- 2605
swap$ -- 272
text.length$ -- 24
text.prefix$ -- 0
top$ -- 0
type$ -- 508
warning$ -- 2
while$ -- 289
width$ -- 0
write$ -- 1160
(There were 3 warnings)

This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) (preloaded format=pdflatex 2023.5.31) 18 JUL 2025 12:58
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
**main.tex
(./main.tex
LaTeX2e <2022-11-01> patch level 1
L3 programming layer <2023-05-22> (./spimufcphdthesis.cls
Document Class: spimufcphdthesis 2022/02/10

(/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/upmethodology-docum
ent.cls
Document Class: upmethodology-document 2022/10/04
(./upmethodology-p-common.sty
Package: upmethodology-p-common 2015/04/24

(/usr/local/texlive/2023/texmf-dist/tex/latex/base/ifthen.sty
Package: ifthen 2022/04/13 v1.1d Standard LaTeX ifthen package (DPC)
)
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/xspace.sty
Package: xspace 2014/10/28 v1.13 Space after command names (DPC,MH)
)
(/usr/local/texlive/2023/texmf-dist/tex/latex/xcolor/xcolor.sty
Package: xcolor 2022/06/12 v2.14 LaTeX color extensions (UK)

(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-cfg/color.cfg
File: color.cfg 2016/01/02 v1.6 sample color configuration
)
Package xcolor Info: Driver file: pdftex.def on input line 227.

(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-def/pdftex.def
File: pdftex.def 2022/09/22 v1.2b Graphics/color driver for pdftex
)
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/mathcolor.ltx)
Package xcolor Info: Model `cmy' substituted by `cmy0' on input line 1353.
Package xcolor Info: Model `hsb' substituted by `rgb' on input line 1357.
Package xcolor Info: Model `RGB' extended on input line 1369.
Package xcolor Info: Model `HTML' substituted by `rgb' on input line 1371.
Package xcolor Info: Model `Hsb' substituted by `hsb' on input line 1372.
Package xcolor Info: Model `tHsb' substituted by `hsb' on input line 1373.
Package xcolor Info: Model `HSB' substituted by `hsb' on input line 1374.
Package xcolor Info: Model `Gray' substituted by `gray' on input line 1375.
Package xcolor Info: Model `wave' substituted by `hsb' on input line 1376.
)
(/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/ifpdf.sty
Package: ifpdf 2019/10/25 v3.4 ifpdf legacy package. Use iftex instead.

(/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/iftex.sty
Package: iftex 2022/02/03 v1.0f TeX engine tests
))
(/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/UPMVERSION.def))
*********** UPMETHODOLOGY BOOK CLASS (WITH PART AND CHAPTER)
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/book.cls
Document Class: book 2022/07/02 v1.4n Standard LaTeX document class
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/bk11.clo
File: bk11.clo 2022/07/02 v1.4n Standard LaTeX file (size option)
)
\c@part=\count185
\c@chapter=\count186
\c@section=\count187
\c@subsection=\count188
\c@subsubsection=\count189
\c@paragraph=\count190
\c@subparagraph=\count191
\c@figure=\count192
\c@table=\count193
\abovecaptionskip=\skip48
\belowcaptionskip=\skip49
\bibindent=\dimen140
)
(/usr/local/texlive/2023/texmf-dist/tex/latex/a4wide/a4wide.sty
Package: a4wide 1994/08/30

(/usr/local/texlive/2023/texmf-dist/tex/latex/ntgclass/a4.sty
Package: a4 2023/01/10 v1.2g A4 based page layout
))
(./upmethodology-document.sty
Package: upmethodology-document 2015/04/24

**** upmethodology-document is using French language ****
(/usr/local/texlive/2023/texmf-dist/tex/generic/babel/babel.sty
Package: babel 2023/05/11 v3.89 The Babel package
\babel@savecnt=\count194
\U@D=\dimen141
\l@unhyphenated=\language87

(/usr/local/texlive/2023/texmf-dist/tex/generic/babel/txtbabel.def)
\bbl@readstream=\read2
\bbl@dirlevel=\count195

(/usr/local/texlive/2023/texmf-dist/tex/generic/babel-french/french.ldf
Language: french 2023/03/08 v3.5q French support from the babel system
Package babel Info: Hyphen rules for 'acadian' set to \l@french
(babel) (\language29). Reported on input line 91.
Package babel Info: Hyphen rules for 'canadien' set to \l@french
(babel) (\language29). Reported on input line 92.
\FB@nonchar=\count196
Package babel Info: Making : an active character on input line 395.
Package babel Info: Making ; an active character on input line 396.
Package babel Info: Making ! an active character on input line 397.
Package babel Info: Making ? an active character on input line 398.
\FBguill@level=\count197
\FBold@everypar=\toks16
\FB@Mht=\dimen142
\mc@charclass=\count198
\mc@charfam=\count199
\mc@charslot=\count266
\std@mcc=\count267
\dec@mcc=\count268
\FB@parskip=\dimen143
\listindentFB=\dimen144
\descindentFB=\dimen145
\labelindentFB=\dimen146
\labelwidthFB=\dimen147
\leftmarginFB=\dimen148
\parindentFFN=\dimen149
\FBfnindent=\dimen150
)
(/usr/local/texlive/2023/texmf-dist/tex/generic/babel-french/frenchb.ldf
Language: frenchb 2023/03/08 v3.5q French support from the babel system


Package babel-french Warning: Option `frenchb' for Babel is *deprecated*,
(babel-french) it might be removed sooner or later. Please
(babel-french) use `french' instead; reported on input line 35.

(/usr/local/texlive/2023/texmf-dist/tex/generic/babel-french/french.ldf
Language: french 2023/03/08 v3.5q French support from the babel system
)))
(/usr/local/texlive/2023/texmf-dist/tex/generic/babel/locale/fr/babel-french.te
x
Package babel Info: Importing font and identification data for french
(babel) from babel-fr.ini. Reported on input line 11.
) (/usr/local/texlive/2023/texmf-dist/tex/latex/carlisle/scalefnt.sty)
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/keyval.sty
Package: keyval 2022/05/29 v1.15 key=value parser (DPC)
\KV@toks@=\toks17
) 138 138 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/vmargin/vmargin.sty 139 139 (/usr/local/texlive/2023/texmf-dist/tex/latex/vmargin/vmargin.sty
Package: vmargin 2004/07/15 V2.5 set document margins (VK) 140 140 Package: vmargin 2004/07/15 V2.5 set document margins (VK)
141 141
Package: vmargin 2004/07/15 V2.5 set document margins (VK) 142 142 Package: vmargin 2004/07/15 V2.5 set document margins (VK)
\PaperWidth=\dimen151 143 143 \PaperWidth=\dimen151
\PaperHeight=\dimen152 144 144 \PaperHeight=\dimen152
) (./upmethodology-extension.sty 145 145 ) (./upmethodology-extension.sty
Package: upmethodology-extension 2012/09/21 146 146 Package: upmethodology-extension 2012/09/21
\upmext@tmp@putx=\skip50 147 147 \upmext@tmp@putx=\skip50
148 148
*** define extension value frontillustrationsize **** 149 149 *** define extension value frontillustrationsize ****
*** define extension value watermarksize **** 150 150 *** define extension value watermarksize ****
*** undefine extension value publisher **** 151 151 *** undefine extension value publisher ****
*** undefine extension value copyrighter **** 152 152 *** undefine extension value copyrighter ****
*** undefine extension value printedin ****) 153 153 *** undefine extension value printedin ****)
(/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/upmethodology-fmt.s 154 154 (/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/upmethodology-fmt.s
ty 155 155 ty
Package: upmethodology-fmt 2022/10/04 156 156 Package: upmethodology-fmt 2022/10/04
**** upmethodology-fmt is using French language **** 157 157 **** upmethodology-fmt is using French language ****
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/graphicx.sty 158 158 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/graphicx.sty
Package: graphicx 2021/09/16 v1.2d Enhanced LaTeX Graphics (DPC,SPQR) 159 159 Package: graphicx 2021/09/16 v1.2d Enhanced LaTeX Graphics (DPC,SPQR)
160 160
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/graphics.sty 161 161 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/graphics.sty
Package: graphics 2022/03/10 v1.4e Standard LaTeX Graphics (DPC,SPQR) 162 162 Package: graphics 2022/03/10 v1.4e Standard LaTeX Graphics (DPC,SPQR)
163 163
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/trig.sty 164 164 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/trig.sty
Package: trig 2021/08/11 v1.11 sin cos tan (DPC) 165 165 Package: trig 2021/08/11 v1.11 sin cos tan (DPC)
) 166 166 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-cfg/graphics.cfg 167 167 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-cfg/graphics.cfg
File: graphics.cfg 2016/06/04 v1.11 sample graphics configuration 168 168 File: graphics.cfg 2016/06/04 v1.11 sample graphics configuration
) 169 169 )
Package graphics Info: Driver file: pdftex.def on input line 107. 170 170 Package graphics Info: Driver file: pdftex.def on input line 107.
) 171 171 )
\Gin@req@height=\dimen153 172 172 \Gin@req@height=\dimen153
\Gin@req@width=\dimen154 173 173 \Gin@req@width=\dimen154
) 174 174 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/caption/subcaption.sty 175 175 (/usr/local/texlive/2023/texmf-dist/tex/latex/caption/subcaption.sty
Package: subcaption 2023/02/19 v1.6 Sub-captions (AR) 176 176 Package: subcaption 2023/02/19 v1.6 Sub-captions (AR)
177 177
(/usr/local/texlive/2023/texmf-dist/tex/latex/caption/caption.sty 178 178 (/usr/local/texlive/2023/texmf-dist/tex/latex/caption/caption.sty
Package: caption 2023/03/12 v3.6j Customizing captions (AR) 179 179 Package: caption 2023/03/12 v3.6j Customizing captions (AR)
180 180
(/usr/local/texlive/2023/texmf-dist/tex/latex/caption/caption3.sty 181 181 (/usr/local/texlive/2023/texmf-dist/tex/latex/caption/caption3.sty
Package: caption3 2023/03/12 v2.4 caption3 kernel (AR) 182 182 Package: caption3 2023/03/12 v2.4 caption3 kernel (AR)
\caption@tempdima=\dimen155 183 183 \caption@tempdima=\dimen155
\captionmargin=\dimen156 184 184 \captionmargin=\dimen156
\caption@leftmargin=\dimen157 185 185 \caption@leftmargin=\dimen157
\caption@rightmargin=\dimen158 186 186 \caption@rightmargin=\dimen158
\caption@width=\dimen159 187 187 \caption@width=\dimen159
\caption@indent=\dimen160 188 188 \caption@indent=\dimen160
\caption@parindent=\dimen161 189 189 \caption@parindent=\dimen161
\caption@hangindent=\dimen162 190 190 \caption@hangindent=\dimen162
Package caption Info: Standard document class detected. 191 191 Package caption Info: Standard document class detected.
Package caption Info: french babel package is loaded. 192 192 Package caption Info: french babel package is loaded.
) 193 193 )
\c@caption@flags=\count269 194 194 \c@caption@flags=\count269
\c@continuedfloat=\count270 195 195 \c@continuedfloat=\count270
) 196 196 )
Package caption Info: New subtype `subfigure' on input line 239. 197 197 Package caption Info: New subtype `subfigure' on input line 239.
\c@subfigure=\count271 198 198 \c@subfigure=\count271
Package caption Info: New subtype `subtable' on input line 239. 199 199 Package caption Info: New subtype `subtable' on input line 239.
\c@subtable=\count272 200 200 \c@subtable=\count272
) 201 201 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/tabularx.sty 202 202 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/tabularx.sty
Package: tabularx 2020/01/15 v2.11c `tabularx' package (DPC) 203 203 Package: tabularx 2020/01/15 v2.11c `tabularx' package (DPC)
204 204
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/array.sty 205 205 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/array.sty
Package: array 2022/09/04 v2.5g Tabular extension package (FMi) 206 206 Package: array 2022/09/04 v2.5g Tabular extension package (FMi)
\col@sep=\dimen163 207 207 \col@sep=\dimen163
\ar@mcellbox=\box51 208 208 \ar@mcellbox=\box51
\extrarowheight=\dimen164 209 209 \extrarowheight=\dimen164
\NC@list=\toks18 210 210 \NC@list=\toks18
\extratabsurround=\skip51 211 211 \extratabsurround=\skip51
\backup@length=\skip52 212 212 \backup@length=\skip52
\ar@cellbox=\box52 213 213 \ar@cellbox=\box52
) 214 214 )
\TX@col@width=\dimen165 215 215 \TX@col@width=\dimen165
\TX@old@table=\dimen166 216 216 \TX@old@table=\dimen166
\TX@old@col=\dimen167 217 217 \TX@old@col=\dimen167
\TX@target=\dimen168 218 218 \TX@target=\dimen168
\TX@delta=\dimen169 219 219 \TX@delta=\dimen169
\TX@cols=\count273 220 220 \TX@cols=\count273
\TX@ftn=\toks19 221 221 \TX@ftn=\toks19
) 222 222 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/multicol.sty 223 223 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/multicol.sty
Package: multicol 2021/11/30 v1.9d multicolumn formatting (FMi) 224 224 Package: multicol 2021/11/30 v1.9d multicolumn formatting (FMi)
\c@tracingmulticols=\count274 225 225 \c@tracingmulticols=\count274
\mult@box=\box53 226 226 \mult@box=\box53
\multicol@leftmargin=\dimen170 227 227 \multicol@leftmargin=\dimen170
\c@unbalance=\count275 228 228 \c@unbalance=\count275
\c@collectmore=\count276 229 229 \c@collectmore=\count276
\doublecol@number=\count277 230 230 \doublecol@number=\count277
\multicoltolerance=\count278 231 231 \multicoltolerance=\count278
\multicolpretolerance=\count279 232 232 \multicolpretolerance=\count279
\full@width=\dimen171 233 233 \full@width=\dimen171
\page@free=\dimen172 234 234 \page@free=\dimen172
\premulticols=\dimen173 235 235 \premulticols=\dimen173
\postmulticols=\dimen174 236 236 \postmulticols=\dimen174
\multicolsep=\skip53 237 237 \multicolsep=\skip53
\multicolbaselineskip=\skip54 238 238 \multicolbaselineskip=\skip54
\partial@page=\box54 239 239 \partial@page=\box54
\last@line=\box55 240 240 \last@line=\box55
\maxbalancingoverflow=\dimen175 241 241 \maxbalancingoverflow=\dimen175
\mult@rightbox=\box56 242 242 \mult@rightbox=\box56
\mult@grightbox=\box57 243 243 \mult@grightbox=\box57
\mult@firstbox=\box58 244 244 \mult@firstbox=\box58
\mult@gfirstbox=\box59 245 245 \mult@gfirstbox=\box59
\@tempa=\box60 246 246 \@tempa=\box60
\@tempa=\box61 247 247 \@tempa=\box61
\@tempa=\box62 248 248 \@tempa=\box62
\@tempa=\box63 249 249 \@tempa=\box63
\@tempa=\box64 250 250 \@tempa=\box64
\@tempa=\box65 251 251 \@tempa=\box65
\@tempa=\box66 252 252 \@tempa=\box66
\@tempa=\box67 253 253 \@tempa=\box67
\@tempa=\box68 254 254 \@tempa=\box68
\@tempa=\box69 255 255 \@tempa=\box69
\@tempa=\box70 256 256 \@tempa=\box70
\@tempa=\box71 257 257 \@tempa=\box71
\@tempa=\box72 258 258 \@tempa=\box72
\@tempa=\box73 259 259 \@tempa=\box73
\@tempa=\box74 260 260 \@tempa=\box74
\@tempa=\box75 261 261 \@tempa=\box75
\@tempa=\box76 262 262 \@tempa=\box76
\@tempa=\box77 263 263 \@tempa=\box77
\@tempa=\box78 264 264 \@tempa=\box78
\@tempa=\box79 265 265 \@tempa=\box79
\@tempa=\box80 266 266 \@tempa=\box80
\@tempa=\box81 267 267 \@tempa=\box81
\@tempa=\box82 268 268 \@tempa=\box82
\@tempa=\box83 269 269 \@tempa=\box83
\@tempa=\box84 270 270 \@tempa=\box84
\@tempa=\box85 271 271 \@tempa=\box85
\@tempa=\box86 272 272 \@tempa=\box86
\@tempa=\box87 273 273 \@tempa=\box87
\@tempa=\box88 274 274 \@tempa=\box88
\@tempa=\box89 275 275 \@tempa=\box89
\@tempa=\box90 276 276 \@tempa=\box90
\@tempa=\box91 277 277 \@tempa=\box91
\@tempa=\box92 278 278 \@tempa=\box92
\@tempa=\box93 279 279 \@tempa=\box93
\@tempa=\box94 280 280 \@tempa=\box94
\@tempa=\box95 281 281 \@tempa=\box95
\c@minrows=\count280 282 282 \c@minrows=\count280
\c@columnbadness=\count281 283 283 \c@columnbadness=\count281
\c@finalcolumnbadness=\count282 284 284 \c@finalcolumnbadness=\count282
\last@try=\dimen176 285 285 \last@try=\dimen176
\multicolovershoot=\dimen177 286 286 \multicolovershoot=\dimen177
\multicolundershoot=\dimen178 287 287 \multicolundershoot=\dimen178
\mult@nat@firstbox=\box96 288 288 \mult@nat@firstbox=\box96
\colbreak@box=\box97 289 289 \colbreak@box=\box97
\mc@col@check@num=\count283 290 290 \mc@col@check@num=\count283
) 291 291 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/colortbl/colortbl.sty 292 292 (/usr/local/texlive/2023/texmf-dist/tex/latex/colortbl/colortbl.sty
Package: colortbl 2022/06/20 v1.0f Color table columns (DPC) 293 293 Package: colortbl 2022/06/20 v1.0f Color table columns (DPC)
\everycr=\toks20 294 294 \everycr=\toks20
\minrowclearance=\skip55 295 295 \minrowclearance=\skip55
\rownum=\count284 296 296 \rownum=\count284
) 297 297 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/picinpar/picinpar.sty 298 298 (/usr/local/texlive/2023/texmf-dist/tex/latex/picinpar/picinpar.sty
Pictures in Paragraphs. Version 1.3, November 22, 2022 299 299 Pictures in Paragraphs. Version 1.3, November 22, 2022
\br=\count285 300 300 \br=\count285
\bl=\count286 301 301 \bl=\count286
\na=\count287 302 302 \na=\count287
\nb=\count288 303 303 \nb=\count288
\tcdsav=\count289 304 304 \tcdsav=\count289
\tcl=\count290 305 305 \tcl=\count290
\tcd=\count291 306 306 \tcd=\count291
\tcn=\count292 307 307 \tcn=\count292
\cumtcl=\count293 308 308 \cumtcl=\count293
\cumpartcl=\count294 309 309 \cumpartcl=\count294
\lftside=\dimen179 310 310 \lftside=\dimen179
\rtside=\dimen180 311 311 \rtside=\dimen180
\hpic=\dimen181 312 312 \hpic=\dimen181
\vpic=\dimen182 313 313 \vpic=\dimen182
\strutilg=\dimen183 314 314 \strutilg=\dimen183
\picwd=\dimen184 315 315 \picwd=\dimen184
\topheight=\dimen185 316 316 \topheight=\dimen185
\ilg=\dimen186 317 317 \ilg=\dimen186
\lpic=\dimen187 318 318 \lpic=\dimen187
\lwindowsep=\dimen188 319 319 \lwindowsep=\dimen188
\rwindowsep=\dimen189 320 320 \rwindowsep=\dimen189
\cumpar=\dimen190 321 321 \cumpar=\dimen190
\twa=\toks21 322 322 \twa=\toks21
\la=\toks22 323 323 \la=\toks22
\ra=\toks23 324 324 \ra=\toks23
\ha=\toks24 325 325 \ha=\toks24
\pictoc=\toks25 326 326 \pictoc=\toks25
\rawtext=\box98 327 327 \rawtext=\box98
\holder=\box99 328 328 \holder=\box99
\windowbox=\box100 329 329 \windowbox=\box100
\wartext=\box101 330 330 \wartext=\box101
\finaltext=\box102 331 331 \finaltext=\box102
\aslice=\box103 332 332 \aslice=\box103
\bslice=\box104 333 333 \bslice=\box104
\wbox=\box105 334 334 \wbox=\box105
\wstrutbox=\box106 335 335 \wstrutbox=\box106
\picbox=\box107 336 336 \picbox=\box107
\waslice=\box108 337 337 \waslice=\box108
\wbslice=\box109 338 338 \wbslice=\box109
\fslice=\box110 339 339 \fslice=\box110
) (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsmath.sty 340 340 ) (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsmath.sty
Package: amsmath 2022/04/08 v2.17n AMS math features 341 341 Package: amsmath 2022/04/08 v2.17n AMS math features
\@mathmargin=\skip56 342 342 \@mathmargin=\skip56
343 343
For additional information on amsmath, use the `?' option. 344 344 For additional information on amsmath, use the `?' option.
(/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amstext.sty 345 345 (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amstext.sty
Package: amstext 2021/08/26 v2.01 AMS text 346 346 Package: amstext 2021/08/26 v2.01 AMS text
347 347
(/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsgen.sty 348 348 (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsgen.sty
File: amsgen.sty 1999/11/30 v2.0 generic functions 349 349 File: amsgen.sty 1999/11/30 v2.0 generic functions
\@emptytoks=\toks26 350 350 \@emptytoks=\toks26
\ex@=\dimen191 351 351 \ex@=\dimen191
)) 352 352 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsbsy.sty 353 353 (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsbsy.sty
Package: amsbsy 1999/11/29 v1.2d Bold Symbols 354 354 Package: amsbsy 1999/11/29 v1.2d Bold Symbols
\pmbraise@=\dimen192 355 355 \pmbraise@=\dimen192
) 356 356 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsopn.sty 357 357 (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsopn.sty
Package: amsopn 2022/04/08 v2.04 operator names 358 358 Package: amsopn 2022/04/08 v2.04 operator names
) 359 359 )
\inf@bad=\count295 360 360 \inf@bad=\count295
LaTeX Info: Redefining \frac on input line 234. 361 361 LaTeX Info: Redefining \frac on input line 234.
\uproot@=\count296 362 362 \uproot@=\count296
\leftroot@=\count297 363 363 \leftroot@=\count297
LaTeX Info: Redefining \overline on input line 399. 364 364 LaTeX Info: Redefining \overline on input line 399.
LaTeX Info: Redefining \colon on input line 410. 365 365 LaTeX Info: Redefining \colon on input line 410.
\classnum@=\count298 366 366 \classnum@=\count298
\DOTSCASE@=\count299 367 367 \DOTSCASE@=\count299
LaTeX Info: Redefining \ldots on input line 496. 368 368 LaTeX Info: Redefining \ldots on input line 496.
LaTeX Info: Redefining \dots on input line 499. 369 369 LaTeX Info: Redefining \dots on input line 499.
LaTeX Info: Redefining \cdots on input line 620. 370 370 LaTeX Info: Redefining \cdots on input line 620.
\Mathstrutbox@=\box111 371 371 \Mathstrutbox@=\box111
\strutbox@=\box112 372 372 \strutbox@=\box112
LaTeX Info: Redefining \big on input line 722. 373 373 LaTeX Info: Redefining \big on input line 722.
LaTeX Info: Redefining \Big on input line 723. 374 374 LaTeX Info: Redefining \Big on input line 723.
LaTeX Info: Redefining \bigg on input line 724. 375 375 LaTeX Info: Redefining \bigg on input line 724.
LaTeX Info: Redefining \Bigg on input line 725. 376 376 LaTeX Info: Redefining \Bigg on input line 725.
\big@size=\dimen193 377 377 \big@size=\dimen193
LaTeX Font Info: Redeclaring font encoding OML on input line 743. 378 378 LaTeX Font Info: Redeclaring font encoding OML on input line 743.
LaTeX Font Info: Redeclaring font encoding OMS on input line 744. 379 379 LaTeX Font Info: Redeclaring font encoding OMS on input line 744.
\macc@depth=\count300 380 380 \macc@depth=\count300
LaTeX Info: Redefining \bmod on input line 905. 381 381 LaTeX Info: Redefining \bmod on input line 905.
LaTeX Info: Redefining \pmod on input line 910. 382 382 LaTeX Info: Redefining \pmod on input line 910.
LaTeX Info: Redefining \smash on input line 940. 383 383 LaTeX Info: Redefining \smash on input line 940.
LaTeX Info: Redefining \relbar on input line 970. 384 384 LaTeX Info: Redefining \relbar on input line 970.
LaTeX Info: Redefining \Relbar on input line 971. 385 385 LaTeX Info: Redefining \Relbar on input line 971.
\c@MaxMatrixCols=\count301 386 386 \c@MaxMatrixCols=\count301
\dotsspace@=\muskip16 387 387 \dotsspace@=\muskip16
\c@parentequation=\count302 388 388 \c@parentequation=\count302
\dspbrk@lvl=\count303 389 389 \dspbrk@lvl=\count303
\tag@help=\toks27 390 390 \tag@help=\toks27
\row@=\count304 391 391 \row@=\count304
\column@=\count305 392 392 \column@=\count305
\maxfields@=\count306 393 393 \maxfields@=\count306
\andhelp@=\toks28 394 394 \andhelp@=\toks28
\eqnshift@=\dimen194 395 395 \eqnshift@=\dimen194
\alignsep@=\dimen195 396 396 \alignsep@=\dimen195
\tagshift@=\dimen196 397 397 \tagshift@=\dimen196
\tagwidth@=\dimen197 398 398 \tagwidth@=\dimen197
\totwidth@=\dimen198 399 399 \totwidth@=\dimen198
\lineht@=\dimen199 400 400 \lineht@=\dimen199
\@envbody=\toks29 401 401 \@envbody=\toks29
\multlinegap=\skip57 402 402 \multlinegap=\skip57
\multlinetaggap=\skip58 403 403 \multlinetaggap=\skip58
\mathdisplay@stack=\toks30 404 404 \mathdisplay@stack=\toks30
LaTeX Info: Redefining \[ on input line 2953. 405 405 LaTeX Info: Redefining \[ on input line 2953.
LaTeX Info: Redefining \] on input line 2954. 406 406 LaTeX Info: Redefining \] on input line 2954.
) 407 407 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/amscls/amsthm.sty 408 408 (/usr/local/texlive/2023/texmf-dist/tex/latex/amscls/amsthm.sty
Package: amsthm 2020/05/29 v2.20.6 409 409 Package: amsthm 2020/05/29 v2.20.6
\thm@style=\toks31 410 410 \thm@style=\toks31
\thm@bodyfont=\toks32 411 411 \thm@bodyfont=\toks32
\thm@headfont=\toks33 412 412 \thm@headfont=\toks33
\thm@notefont=\toks34 413 413 \thm@notefont=\toks34
\thm@headpunct=\toks35 414 414 \thm@headpunct=\toks35
\thm@preskip=\skip59 415 415 \thm@preskip=\skip59
\thm@postskip=\skip60 416 416 \thm@postskip=\skip60
\thm@headsep=\skip61 417 417 \thm@headsep=\skip61
\dth@everypar=\toks36 418 418 \dth@everypar=\toks36
) 419 419 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thmtools.sty 420 420 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thmtools.sty
Package: thmtools 2023/05/04 v0.76 421 421 Package: thmtools 2023/05/04 v0.76
\thmt@toks=\toks37 422 422 \thmt@toks=\toks37
\c@thmt@dummyctr=\count307 423 423 \c@thmt@dummyctr=\count307
424 424
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-patch.sty 425 425 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-patch.sty
Package: thm-patch 2023/05/04 v0.76 426 426 Package: thm-patch 2023/05/04 v0.76
427 427
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/parseargs.sty 428 428 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/parseargs.sty
Package: parseargs 2023/05/04 v0.76 429 429 Package: parseargs 2023/05/04 v0.76
\@parsespec=\toks38 430 430 \@parsespec=\toks38
)) 431 431 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-kv.sty 432 432 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-kv.sty
Package: thm-kv 2023/05/04 v0.76 433 433 Package: thm-kv 2023/05/04 v0.76
Package thm-kv Info: Theorem names will be uppercased on input line 42. 434 434 Package thm-kv Info: Theorem names will be uppercased on input line 42.
435 435
(/usr/local/texlive/2023/texmf-dist/tex/latex/kvsetkeys/kvsetkeys.sty 436 436 (/usr/local/texlive/2023/texmf-dist/tex/latex/kvsetkeys/kvsetkeys.sty
Package: kvsetkeys 2022-10-05 v1.19 Key value parser (HO) 437 437 Package: kvsetkeys 2022-10-05 v1.19 Key value parser (HO)
) 438 438 )
Package thm-kv Info: kvsetkeys patch (v1.16 or later) on input line 158. 439 439 Package thm-kv Info: kvsetkeys patch (v1.16 or later) on input line 158.
) 440 440 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-autoref.sty 441 441 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-autoref.sty
Package: thm-autoref 2023/05/04 v0.76 442 442 Package: thm-autoref 2023/05/04 v0.76
443 443
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/aliasctr.sty 444 444 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/aliasctr.sty
Package: aliasctr 2023/05/04 v0.76 445 445 Package: aliasctr 2023/05/04 v0.76
)) 446 446 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-listof.sty 447 447 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-listof.sty
Package: thm-listof 2023/05/04 v0.76 448 448 Package: thm-listof 2023/05/04 v0.76
) 449 449 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-restate.sty 450 450 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-restate.sty
Package: thm-restate 2023/05/04 v0.76 451 451 Package: thm-restate 2023/05/04 v0.76
) 452 452 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-amsthm.sty 453 453 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-amsthm.sty
Package: thm-amsthm 2023/05/04 v0.76 454 454 Package: thm-amsthm 2023/05/04 v0.76
\thmt@style@headstyle=\toks39 455 455 \thmt@style@headstyle=\toks39
)) 456 456 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/pifont.sty 457 457 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/pifont.sty
Package: pifont 2020/03/25 PSNFSS-v9.3 Pi font support (SPQR) 458 458 Package: pifont 2020/03/25 PSNFSS-v9.3 Pi font support (SPQR)
LaTeX Font Info: Trying to load font information for U+pzd on input line 63. 459 459 LaTeX Font Info: Trying to load font information for U+pzd on input line 63.
460 460
461 461
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/upzd.fd 462 462 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/upzd.fd
File: upzd.fd 2001/06/04 font definitions for U/pzd. 463 463 File: upzd.fd 2001/06/04 font definitions for U/pzd.
) 464 464 )
LaTeX Font Info: Trying to load font information for U+psy on input line 64. 465 465 LaTeX Font Info: Trying to load font information for U+psy on input line 64.
466 466
467 467
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/upsy.fd 468 468 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/upsy.fd
File: upsy.fd 2001/06/04 font definitions for U/psy. 469 469 File: upsy.fd 2001/06/04 font definitions for U/psy.
)) 470 470 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/setspace/setspace.sty 471 471 (/usr/local/texlive/2023/texmf-dist/tex/latex/setspace/setspace.sty
Package: setspace 2022/12/04 v6.7b set line spacing 472 472 Package: setspace 2022/12/04 v6.7b set line spacing
) 473 473 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/varioref.sty 474 474 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/varioref.sty
Package: varioref 2022/01/09 v1.6f package for extended references (FMi) 475 475 Package: varioref 2022/01/09 v1.6f package for extended references (FMi)
\c@vrcnt=\count308 476 476 \c@vrcnt=\count308
) 477 477 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/txfonts.sty 478 478 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/txfonts.sty
Package: txfonts 2008/01/22 v3.2.1 479 479 Package: txfonts 2008/01/22 v3.2.1
LaTeX Font Info: Redeclaring symbol font `operators' on input line 21. 480 480 LaTeX Font Info: Redeclaring symbol font `operators' on input line 21.
LaTeX Font Info: Overwriting symbol font `operators' in version `normal' 481 481 LaTeX Font Info: Overwriting symbol font `operators' in version `normal'
(Font) OT1/cmr/m/n --> OT1/txr/m/n on input line 21. 482 482 (Font) OT1/cmr/m/n --> OT1/txr/m/n on input line 21.
LaTeX Font Info: Overwriting symbol font `operators' in version `bold' 483 483 LaTeX Font Info: Overwriting symbol font `operators' in version `bold'
(Font) OT1/cmr/bx/n --> OT1/txr/m/n on input line 21. 484 484 (Font) OT1/cmr/bx/n --> OT1/txr/m/n on input line 21.
LaTeX Font Info: Overwriting symbol font `operators' in version `bold' 485 485 LaTeX Font Info: Overwriting symbol font `operators' in version `bold'
(Font) OT1/txr/m/n --> OT1/txr/bx/n on input line 22. 486 486 (Font) OT1/txr/m/n --> OT1/txr/bx/n on input line 22.
\symitalic=\mathgroup4 487 487 \symitalic=\mathgroup4
LaTeX Font Info: Overwriting symbol font `italic' in version `bold' 488 488 LaTeX Font Info: Overwriting symbol font `italic' in version `bold'
(Font) OT1/txr/m/it --> OT1/txr/bx/it on input line 26. 489 489 (Font) OT1/txr/m/it --> OT1/txr/bx/it on input line 26.
LaTeX Font Info: Redeclaring math alphabet \mathbf on input line 29. 490 490 LaTeX Font Info: Redeclaring math alphabet \mathbf on input line 29.
LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `normal' 491 491 LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `normal'
(Font) OT1/cmr/bx/n --> OT1/txr/bx/n on input line 29. 492 492 (Font) OT1/cmr/bx/n --> OT1/txr/bx/n on input line 29.
LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `bold' 493 493 LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `bold'
(Font) OT1/cmr/bx/n --> OT1/txr/bx/n on input line 29. 494 494 (Font) OT1/cmr/bx/n --> OT1/txr/bx/n on input line 29.
LaTeX Font Info: Redeclaring math alphabet \mathit on input line 30. 495 495 LaTeX Font Info: Redeclaring math alphabet \mathit on input line 30.
LaTeX Font Info: Overwriting math alphabet `\mathit' in version `normal' 496 496 LaTeX Font Info: Overwriting math alphabet `\mathit' in version `normal'
(Font) OT1/cmr/m/it --> OT1/txr/m/it on input line 30. 497 497 (Font) OT1/cmr/m/it --> OT1/txr/m/it on input line 30.
LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold' 498 498 LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold'
(Font) OT1/cmr/bx/it --> OT1/txr/m/it on input line 30. 499 499 (Font) OT1/cmr/bx/it --> OT1/txr/m/it on input line 30.
LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold' 500 500 LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold'
(Font) OT1/txr/m/it --> OT1/txr/bx/it on input line 31. 501 501 (Font) OT1/txr/m/it --> OT1/txr/bx/it on input line 31.
LaTeX Font Info: Redeclaring math alphabet \mathsf on input line 40. 502 502 LaTeX Font Info: Redeclaring math alphabet \mathsf on input line 40.
LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `normal' 503 503 LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `normal'
(Font) OT1/cmss/m/n --> OT1/txss/m/n on input line 40. 504 504 (Font) OT1/cmss/m/n --> OT1/txss/m/n on input line 40.
LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold' 505 505 LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold'
(Font) OT1/cmss/bx/n --> OT1/txss/m/n on input line 40. 506 506 (Font) OT1/cmss/bx/n --> OT1/txss/m/n on input line 40.
LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold' 507 507 LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold'
(Font) OT1/txss/m/n --> OT1/txss/b/n on input line 41. 508 508 (Font) OT1/txss/m/n --> OT1/txss/b/n on input line 41.
LaTeX Font Info: Redeclaring math alphabet \mathtt on input line 50. 509 509 LaTeX Font Info: Redeclaring math alphabet \mathtt on input line 50.
LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `normal' 510 510 LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `normal'
(Font) OT1/cmtt/m/n --> OT1/txtt/m/n on input line 50. 511 511 (Font) OT1/cmtt/m/n --> OT1/txtt/m/n on input line 50.
LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold' 512 512 LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold'
(Font) OT1/cmtt/m/n --> OT1/txtt/m/n on input line 50. 513 513 (Font) OT1/cmtt/m/n --> OT1/txtt/m/n on input line 50.
LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold' 514 514 LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold'
(Font) OT1/txtt/m/n --> OT1/txtt/b/n on input line 51. 515 515 (Font) OT1/txtt/m/n --> OT1/txtt/b/n on input line 51.
LaTeX Font Info: Redeclaring symbol font `letters' on input line 58. 516 516 LaTeX Font Info: Redeclaring symbol font `letters' on input line 58.
LaTeX Font Info: Overwriting symbol font `letters' in version `normal' 517 517 LaTeX Font Info: Overwriting symbol font `letters' in version `normal'
(Font) OML/cmm/m/it --> OML/txmi/m/it on input line 58. 518 518 (Font) OML/cmm/m/it --> OML/txmi/m/it on input line 58.
LaTeX Font Info: Overwriting symbol font `letters' in version `bold' 519 519 LaTeX Font Info: Overwriting symbol font `letters' in version `bold'
(Font) OML/cmm/b/it --> OML/txmi/m/it on input line 58. 520 520 (Font) OML/cmm/b/it --> OML/txmi/m/it on input line 58.
LaTeX Font Info: Overwriting symbol font `letters' in version `bold' 521 521 LaTeX Font Info: Overwriting symbol font `letters' in version `bold'
(Font) OML/txmi/m/it --> OML/txmi/bx/it on input line 59. 522 522 (Font) OML/txmi/m/it --> OML/txmi/bx/it on input line 59.
\symlettersA=\mathgroup5 523 523 \symlettersA=\mathgroup5
LaTeX Font Info: Overwriting symbol font `lettersA' in version `bold' 524 524 LaTeX Font Info: Overwriting symbol font `lettersA' in version `bold'
(Font) U/txmia/m/it --> U/txmia/bx/it on input line 67. 525 525 (Font) U/txmia/m/it --> U/txmia/bx/it on input line 67.
LaTeX Font Info: Redeclaring symbol font `symbols' on input line 77. 526 526 LaTeX Font Info: Redeclaring symbol font `symbols' on input line 77.
LaTeX Font Info: Overwriting symbol font `symbols' in version `normal' 527 527 LaTeX Font Info: Overwriting symbol font `symbols' in version `normal'
(Font) OMS/cmsy/m/n --> OMS/txsy/m/n on input line 77. 528 528 (Font) OMS/cmsy/m/n --> OMS/txsy/m/n on input line 77.
LaTeX Font Info: Overwriting symbol font `symbols' in version `bold' 529 529 LaTeX Font Info: Overwriting symbol font `symbols' in version `bold'
(Font) OMS/cmsy/b/n --> OMS/txsy/m/n on input line 77. 530 530 (Font) OMS/cmsy/b/n --> OMS/txsy/m/n on input line 77.
LaTeX Font Info: Overwriting symbol font `symbols' in version `bold' 531 531 LaTeX Font Info: Overwriting symbol font `symbols' in version `bold'
(Font) OMS/txsy/m/n --> OMS/txsy/bx/n on input line 78. 532 532 (Font) OMS/txsy/m/n --> OMS/txsy/bx/n on input line 78.
\symAMSa=\mathgroup6 533 533 \symAMSa=\mathgroup6
LaTeX Font Info: Overwriting symbol font `AMSa' in version `bold' 534 534 LaTeX Font Info: Overwriting symbol font `AMSa' in version `bold'
(Font) U/txsya/m/n --> U/txsya/bx/n on input line 94. 535 535 (Font) U/txsya/m/n --> U/txsya/bx/n on input line 94.
\symAMSb=\mathgroup7 536 536 \symAMSb=\mathgroup7
LaTeX Font Info: Overwriting symbol font `AMSb' in version `bold' 537 537 LaTeX Font Info: Overwriting symbol font `AMSb' in version `bold'
(Font) U/txsyb/m/n --> U/txsyb/bx/n on input line 103. 538 538 (Font) U/txsyb/m/n --> U/txsyb/bx/n on input line 103.
\symsymbolsC=\mathgroup8 539 539 \symsymbolsC=\mathgroup8
LaTeX Font Info: Overwriting symbol font `symbolsC' in version `bold' 540 540 LaTeX Font Info: Overwriting symbol font `symbolsC' in version `bold'
(Font) U/txsyc/m/n --> U/txsyc/bx/n on input line 113. 541 541 (Font) U/txsyc/m/n --> U/txsyc/bx/n on input line 113.
LaTeX Font Info: Redeclaring symbol font `largesymbols' on input line 120. 542 542 LaTeX Font Info: Redeclaring symbol font `largesymbols' on input line 120.
LaTeX Font Info: Overwriting symbol font `largesymbols' in version `normal' 543 543 LaTeX Font Info: Overwriting symbol font `largesymbols' in version `normal'
(Font) OMX/cmex/m/n --> OMX/txex/m/n on input line 120. 544 544 (Font) OMX/cmex/m/n --> OMX/txex/m/n on input line 120.
LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold' 545 545 LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold'
(Font) OMX/cmex/m/n --> OMX/txex/m/n on input line 120. 546 546 (Font) OMX/cmex/m/n --> OMX/txex/m/n on input line 120.
LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold' 547 547 LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold'
(Font) OMX/txex/m/n --> OMX/txex/bx/n on input line 121. 548 548 (Font) OMX/txex/m/n --> OMX/txex/bx/n on input line 121.
\symlargesymbolsA=\mathgroup9 549 549 \symlargesymbolsA=\mathgroup9
LaTeX Font Info: Overwriting symbol font `largesymbolsA' in version `bold' 550 550 LaTeX Font Info: Overwriting symbol font `largesymbolsA' in version `bold'
(Font) U/txexa/m/n --> U/txexa/bx/n on input line 129. 551 551 (Font) U/txexa/m/n --> U/txexa/bx/n on input line 129.
LaTeX Font Info: Redeclaring math symbol \mathsterling on input line 164. 552 552 LaTeX Font Info: Redeclaring math symbol \mathsterling on input line 164.
LaTeX Font Info: Redeclaring math symbol \hbar on input line 591. 553 553 LaTeX Font Info: Redeclaring math symbol \hbar on input line 591.
LaTeX Info: Redefining \not on input line 1043. 554 554 LaTeX Info: Redefining \not on input line 1043.
LaTeX Info: Redefining \textsquare on input line 1063. 555 555 LaTeX Info: Redefining \textsquare on input line 1063.
LaTeX Info: Redefining \openbox on input line 1064. 556 556 LaTeX Info: Redefining \openbox on input line 1064.
) 557 557 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/relsize/relsize.sty 558 558 (/usr/local/texlive/2023/texmf-dist/tex/latex/relsize/relsize.sty
Package: relsize 2013/03/29 ver 4.1 559 559 Package: relsize 2013/03/29 ver 4.1
) 560 560 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/xkeyval/xkeyval.sty 561 561 (/usr/local/texlive/2023/texmf-dist/tex/latex/xkeyval/xkeyval.sty
Package: xkeyval 2022/06/16 v2.9 package option processing (HA) 562 562 Package: xkeyval 2022/06/16 v2.9 package option processing (HA)
563 563
(/usr/local/texlive/2023/texmf-dist/tex/generic/xkeyval/xkeyval.tex 564 564 (/usr/local/texlive/2023/texmf-dist/tex/generic/xkeyval/xkeyval.tex
(/usr/local/texlive/2023/texmf-dist/tex/generic/xkeyval/xkvutils.tex 565 565 (/usr/local/texlive/2023/texmf-dist/tex/generic/xkeyval/xkvutils.tex
\XKV@toks=\toks40 566 566 \XKV@toks=\toks40
\XKV@tempa@toks=\toks41 567 567 \XKV@tempa@toks=\toks41
) 568 568 )
\XKV@depth=\count309 569 569 \XKV@depth=\count309
File: xkeyval.tex 2014/12/03 v2.7a key=value parser (HA) 570 570 File: xkeyval.tex 2014/12/03 v2.7a key=value parser (HA)
)) 571 571 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyphenat/hyphenat.sty 572 572 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyphenat/hyphenat.sty
Package: hyphenat 2009/09/02 v2.3c hyphenation utilities 573 573 Package: hyphenat 2009/09/02 v2.3c hyphenation utilities
\langwohyphens=\language88 574 574 \langwohyphens=\language88
LaTeX Info: Redefining \_ on input line 43. 575 575 LaTeX Info: Redefining \_ on input line 43.
) 576 576 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/bbm-macros/bbm.sty 577 577 (/usr/local/texlive/2023/texmf-dist/tex/latex/bbm-macros/bbm.sty
Package: bbm 1999/03/15 V 1.2 provides fonts for set symbols - TH 578 578 Package: bbm 1999/03/15 V 1.2 provides fonts for set symbols - TH
LaTeX Font Info: Overwriting math alphabet `\mathbbm' in version `bold' 579 579 LaTeX Font Info: Overwriting math alphabet `\mathbbm' in version `bold'
(Font) U/bbm/m/n --> U/bbm/bx/n on input line 33. 580 580 (Font) U/bbm/m/n --> U/bbm/bx/n on input line 33.
LaTeX Font Info: Overwriting math alphabet `\mathbbmss' in version `bold' 581 581 LaTeX Font Info: Overwriting math alphabet `\mathbbmss' in version `bold'
(Font) U/bbmss/m/n --> U/bbmss/bx/n on input line 35. 582 582 (Font) U/bbmss/m/n --> U/bbmss/bx/n on input line 35.
) 583 583 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/environ/environ.sty 584 584 (/usr/local/texlive/2023/texmf-dist/tex/latex/environ/environ.sty
Package: environ 2014/05/04 v0.3 A new way to define environments 585 585 Package: environ 2014/05/04 v0.3 A new way to define environments
586 586
(/usr/local/texlive/2023/texmf-dist/tex/latex/trimspaces/trimspaces.sty 587 587 (/usr/local/texlive/2023/texmf-dist/tex/latex/trimspaces/trimspaces.sty
Package: trimspaces 2009/09/17 v1.1 Trim spaces around a token list 588 588 Package: trimspaces 2009/09/17 v1.1 Trim spaces around a token list
)) 589 589 ))
\c@upm@subfigure@count=\count310 590 590 \c@upm@subfigure@count=\count310
\c@upm@fmt@mtabular@columnnumber=\count311 591 591 \c@upm@fmt@mtabular@columnnumber=\count311
\c@upm@format@section@sectionlevel=\count312 592 592 \c@upm@format@section@sectionlevel=\count312
\c@upm@fmt@savedcounter=\count313 593 593 \c@upm@fmt@savedcounter=\count313
\c@@@upm@fmt@inlineenumeration=\count314 594 594 \c@@@upm@fmt@inlineenumeration=\count314
\c@@upm@fmt@enumdescription@cnt@=\count315 595 595 \c@@upm@fmt@enumdescription@cnt@=\count315
\upm@framed@minipage=\box113 596 596 \upm@framed@minipage=\box113
\upm@highlight@box@save=\box114 597 597 \upm@highlight@box@save=\box114
\c@upmdefinition=\count316 598 598 \c@upmdefinition=\count316
) 599 599 )
(./upmethodology-version.sty 600 600 (./upmethodology-version.sty
Package: upmethodology-version 2013/08/26 601 601 Package: upmethodology-version 2013/08/26
602 602
**** upmethodology-version is using French language **** 603 603 **** upmethodology-version is using French language ****
\upm@tmp@a=\count317 604 604 \upm@tmp@a=\count317
) 605 605 )
\listendskip=\skip62 606 606 \listendskip=\skip62
) 607 607 )
(./upmethodology-frontpage.sty 608 608 (./upmethodology-frontpage.sty
Package: upmethodology-frontpage 2015/06/26 609 609 Package: upmethodology-frontpage 2015/06/26
610 610
**** upmethodology-frontpage is using French language **** 611 611 **** upmethodology-frontpage is using French language ****
\upm@front@tmpa=\dimen256 612 612 \upm@front@tmpa=\dimen256
\upm@front@tmpb=\dimen257 613 613 \upm@front@tmpb=\dimen257
614 614
*** define extension value frontillustrationsize ****) 615 615 *** define extension value frontillustrationsize ****)
(./upmethodology-backpage.sty 616 616 (./upmethodology-backpage.sty
Package: upmethodology-backpage 2013/12/14 617 617 Package: upmethodology-backpage 2013/12/14
618 618
**** upmethodology-backpage is using French language ****) 619 619 **** upmethodology-backpage is using French language ****)
(/usr/local/texlive/2023/texmf-dist/tex/latex/url/url.sty 620 620 (/usr/local/texlive/2023/texmf-dist/tex/latex/url/url.sty
\Urlmuskip=\muskip17 621 621 \Urlmuskip=\muskip17
Package: url 2013/09/16 ver 3.4 Verb mode for urls, etc. 622 622 Package: url 2013/09/16 ver 3.4 Verb mode for urls, etc.
) 623 623 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/hyperref.sty 624 624 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/hyperref.sty
Package: hyperref 2023-05-16 v7.00y Hypertext links for LaTeX 625 625 Package: hyperref 2023-05-16 v7.00y Hypertext links for LaTeX
626 626
(/usr/local/texlive/2023/texmf-dist/tex/generic/ltxcmds/ltxcmds.sty 627 627 (/usr/local/texlive/2023/texmf-dist/tex/generic/ltxcmds/ltxcmds.sty
Package: ltxcmds 2020-05-10 v1.25 LaTeX kernel commands for general use (HO) 628 628 Package: ltxcmds 2020-05-10 v1.25 LaTeX kernel commands for general use (HO)
) 629 629 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/pdftexcmds/pdftexcmds.sty 630 630 (/usr/local/texlive/2023/texmf-dist/tex/generic/pdftexcmds/pdftexcmds.sty
Package: pdftexcmds 2020-06-27 v0.33 Utility functions of pdfTeX for LuaTeX (HO)
633 633
(/usr/local/texlive/2023/texmf-dist/tex/generic/infwarerr/infwarerr.sty 634 634 (/usr/local/texlive/2023/texmf-dist/tex/generic/infwarerr/infwarerr.sty
Package: infwarerr 2019/12/03 v1.5 Providing info/warning/error messages (HO) 635 635 Package: infwarerr 2019/12/03 v1.5 Providing info/warning/error messages (HO)
) 636 636 )
Package pdftexcmds Info: \pdf@primitive is available. 637 637 Package pdftexcmds Info: \pdf@primitive is available.
Package pdftexcmds Info: \pdf@ifprimitive is available. 638 638 Package pdftexcmds Info: \pdf@ifprimitive is available.
Package pdftexcmds Info: \pdfdraftmode found. 639 639 Package pdftexcmds Info: \pdfdraftmode found.
) 640 640 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/kvdefinekeys/kvdefinekeys.sty 641 641 (/usr/local/texlive/2023/texmf-dist/tex/generic/kvdefinekeys/kvdefinekeys.sty
Package: kvdefinekeys 2019-12-19 v1.6 Define keys (HO) 642 642 Package: kvdefinekeys 2019-12-19 v1.6 Define keys (HO)
) 643 643 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/pdfescape/pdfescape.sty 644 644 (/usr/local/texlive/2023/texmf-dist/tex/generic/pdfescape/pdfescape.sty
Package: pdfescape 2019/12/09 v1.15 Implements pdfTeX's escape features (HO) 645 645 Package: pdfescape 2019/12/09 v1.15 Implements pdfTeX's escape features (HO)
) 646 646 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/hycolor/hycolor.sty 647 647 (/usr/local/texlive/2023/texmf-dist/tex/latex/hycolor/hycolor.sty
Package: hycolor 2020-01-27 v1.10 Color options for hyperref/bookmark (HO) 648 648 Package: hycolor 2020-01-27 v1.10 Color options for hyperref/bookmark (HO)
) 649 649 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/letltxmacro/letltxmacro.sty 650 650 (/usr/local/texlive/2023/texmf-dist/tex/latex/letltxmacro/letltxmacro.sty
Package: letltxmacro 2019/12/03 v1.6 Let assignment for LaTeX macros (HO) 651 651 Package: letltxmacro 2019/12/03 v1.6 Let assignment for LaTeX macros (HO)
) 652 652 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/auxhook/auxhook.sty 653 653 (/usr/local/texlive/2023/texmf-dist/tex/latex/auxhook/auxhook.sty
Package: auxhook 2019-12-17 v1.6 Hooks for auxiliary files (HO) 654 654 Package: auxhook 2019-12-17 v1.6 Hooks for auxiliary files (HO)
) 655 655 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/nameref.sty 656 656 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/nameref.sty
Package: nameref 2023-05-16 v2.51 Cross-referencing by name of section 657 657 Package: nameref 2023-05-16 v2.51 Cross-referencing by name of section
658 658
(/usr/local/texlive/2023/texmf-dist/tex/latex/refcount/refcount.sty 659 659 (/usr/local/texlive/2023/texmf-dist/tex/latex/refcount/refcount.sty
Package: refcount 2019/12/15 v3.6 Data extraction from label references (HO) 660 660 Package: refcount 2019/12/15 v3.6 Data extraction from label references (HO)
) 661 661 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/gettitlestring/gettitlestring.sty
Package: gettitlestring 2019/12/15 v1.6 Cleanup title references (HO) 664 664 Package: gettitlestring 2019/12/15 v1.6 Cleanup title references (HO)
(/usr/local/texlive/2023/texmf-dist/tex/latex/kvoptions/kvoptions.sty 665 665 (/usr/local/texlive/2023/texmf-dist/tex/latex/kvoptions/kvoptions.sty
Package: kvoptions 2022-06-15 v3.15 Key value format for package options (HO) 666 666 Package: kvoptions 2022-06-15 v3.15 Key value format for package options (HO)
)) 667 667 ))
\c@section@level=\count318 668 668 \c@section@level=\count318
) 669 669 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/etoolbox/etoolbox.sty 670 670 (/usr/local/texlive/2023/texmf-dist/tex/latex/etoolbox/etoolbox.sty
Package: etoolbox 2020/10/05 v2.5k e-TeX tools for LaTeX (JAW) 671 671 Package: etoolbox 2020/10/05 v2.5k e-TeX tools for LaTeX (JAW)
\etb@tempcnta=\count319 672 672 \etb@tempcnta=\count319
) 673 673 )
\@linkdim=\dimen258 674 674 \@linkdim=\dimen258
\Hy@linkcounter=\count320 675 675 \Hy@linkcounter=\count320
\Hy@pagecounter=\count321 676 676 \Hy@pagecounter=\count321
677 677
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/pd1enc.def 678 678 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/pd1enc.def
File: pd1enc.def 2023-05-16 v7.00y Hyperref: PDFDocEncoding definition (HO) 679 679 File: pd1enc.def 2023-05-16 v7.00y Hyperref: PDFDocEncoding definition (HO)
Now handling font encoding PD1 ... 680 680 Now handling font encoding PD1 ...
... no UTF-8 mapping file for font encoding PD1 681 681 ... no UTF-8 mapping file for font encoding PD1
) 682 682 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/intcalc/intcalc.sty 683 683 (/usr/local/texlive/2023/texmf-dist/tex/generic/intcalc/intcalc.sty
Package: intcalc 2019/12/15 v1.3 Expandable calculations with integers (HO) 684 684 Package: intcalc 2019/12/15 v1.3 Expandable calculations with integers (HO)
) 685 685 )
\Hy@SavedSpaceFactor=\count322 686 686 \Hy@SavedSpaceFactor=\count322
687 687
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/puenc.def 688 688 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/puenc.def
File: puenc.def 2023-05-16 v7.00y Hyperref: PDF Unicode definition (HO) 689 689 File: puenc.def 2023-05-16 v7.00y Hyperref: PDF Unicode definition (HO)
Now handling font encoding PU ... 690 690 Now handling font encoding PU ...
... no UTF-8 mapping file for font encoding PU 691 691 ... no UTF-8 mapping file for font encoding PU
) 692 692 )
Package hyperref Info: Option `breaklinks' set `true' on input line 4050. 693 693 Package hyperref Info: Option `breaklinks' set `true' on input line 4050.
Package hyperref Info: Option `pageanchor' set `true' on input line 4050. 694 694 Package hyperref Info: Option `pageanchor' set `true' on input line 4050.
Package hyperref Info: Option `bookmarks' set `false' on input line 4050. 695 695 Package hyperref Info: Option `bookmarks' set `false' on input line 4050.
Package hyperref Info: Option `hyperfigures' set `true' on input line 4050. 696 696 Package hyperref Info: Option `hyperfigures' set `true' on input line 4050.
Package hyperref Info: Option `hyperindex' set `true' on input line 4050. 697 697 Package hyperref Info: Option `hyperindex' set `true' on input line 4050.
Package hyperref Info: Option `linktocpage' set `true' on input line 4050. 698 698 Package hyperref Info: Option `linktocpage' set `true' on input line 4050.
Package hyperref Info: Option `bookmarks' set `true' on input line 4050. 699 699 Package hyperref Info: Option `bookmarks' set `true' on input line 4050.
Package hyperref Info: Option `bookmarksopen' set `true' on input line 4050. 700 700 Package hyperref Info: Option `bookmarksopen' set `true' on input line 4050.
Package hyperref Info: Option `bookmarksnumbered' set `true' on input line 4050.
Package hyperref Info: Option `colorlinks' set `false' on input line 4050. 703 703 Package hyperref Info: Option `colorlinks' set `false' on input line 4050.
Package hyperref Info: Hyper figures ON on input line 4165. 704 704 Package hyperref Info: Hyper figures ON on input line 4165.
Package hyperref Info: Link nesting OFF on input line 4172. 705 705 Package hyperref Info: Link nesting OFF on input line 4172.
Package hyperref Info: Hyper index ON on input line 4175. 706 706 Package hyperref Info: Hyper index ON on input line 4175.
Package hyperref Info: Plain pages OFF on input line 4182. 707 707 Package hyperref Info: Plain pages OFF on input line 4182.
Package hyperref Info: Backreferencing OFF on input line 4187. 708 708 Package hyperref Info: Backreferencing OFF on input line 4187.
Package hyperref Info: Implicit mode ON; LaTeX internals redefined. 709 709 Package hyperref Info: Implicit mode ON; LaTeX internals redefined.
Package hyperref Info: Bookmarks ON on input line 4434. 710 710 Package hyperref Info: Bookmarks ON on input line 4434.
LaTeX Info: Redefining \href on input line 4683. 711 711 LaTeX Info: Redefining \href on input line 4683.
\c@Hy@tempcnt=\count323 712 712 \c@Hy@tempcnt=\count323
LaTeX Info: Redefining \url on input line 4772. 713 713 LaTeX Info: Redefining \url on input line 4772.
\XeTeXLinkMargin=\dimen259 714 714 \XeTeXLinkMargin=\dimen259
715 715
(/usr/local/texlive/2023/texmf-dist/tex/generic/bitset/bitset.sty 716 716 (/usr/local/texlive/2023/texmf-dist/tex/generic/bitset/bitset.sty
Package: bitset 2019/12/09 v1.3 Handle bit-vector datatype (HO) 717 717 Package: bitset 2019/12/09 v1.3 Handle bit-vector datatype (HO)
718 718
(/usr/local/texlive/2023/texmf-dist/tex/generic/bigintcalc/bigintcalc.sty 719 719 (/usr/local/texlive/2023/texmf-dist/tex/generic/bigintcalc/bigintcalc.sty
Package: bigintcalc 2019/12/15 v1.5 Expandable calculations on big integers (HO)
)) 722 722 ))
\Fld@menulength=\count324 723 723 \Fld@menulength=\count324
\Field@Width=\dimen260 724 724 \Field@Width=\dimen260
\Fld@charsize=\dimen261 725 725 \Fld@charsize=\dimen261
Package hyperref Info: Hyper figures ON on input line 6049. 726 726 Package hyperref Info: Hyper figures ON on input line 6049.
Package hyperref Info: Link nesting OFF on input line 6056. 727 727 Package hyperref Info: Link nesting OFF on input line 6056.
Package hyperref Info: Hyper index ON on input line 6059. 728 728 Package hyperref Info: Hyper index ON on input line 6059.
Package hyperref Info: backreferencing OFF on input line 6066. 729 729 Package hyperref Info: backreferencing OFF on input line 6066.
Package hyperref Info: Link coloring OFF on input line 6071. 730 730 Package hyperref Info: Link coloring OFF on input line 6071.
Package hyperref Info: Link coloring with OCG OFF on input line 6076. 731 731 Package hyperref Info: Link coloring with OCG OFF on input line 6076.
Package hyperref Info: PDF/A mode OFF on input line 6081. 732 732 Package hyperref Info: PDF/A mode OFF on input line 6081.
733 733
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/atbegshi-ltx.sty 734 734 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/atbegshi-ltx.sty
Package: atbegshi-ltx 2021/01/10 v1.0c Emulation of the original atbegshi package with kernel methods
) 737 737 )
\Hy@abspage=\count325 738 738 \Hy@abspage=\count325
\c@Item=\count326 739 739 \c@Item=\count326
\c@Hfootnote=\count327 740 740 \c@Hfootnote=\count327
) 741 741 )
Package hyperref Info: Driver: hpdftex. 742 742 Package hyperref Info: Driver: hpdftex.
743 743
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/hpdftex.def 744 744 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/hpdftex.def
File: hpdftex.def 2023-05-16 v7.00y Hyperref driver for pdfTeX 745 745 File: hpdftex.def 2023-05-16 v7.00y Hyperref driver for pdfTeX
746 746
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/atveryend-ltx.sty 747 747 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/atveryend-ltx.sty
Package: atveryend-ltx 2020/08/19 v1.0a Emulation of the original atveryend package with kernel methods
) 751 751 )
\Fld@listcount=\count328 752 752 \Fld@listcount=\count328
\c@bookmark@seq@number=\count329 753 753 \c@bookmark@seq@number=\count329
754 754
(/usr/local/texlive/2023/texmf-dist/tex/latex/rerunfilecheck/rerunfilecheck.sty 755 755 (/usr/local/texlive/2023/texmf-dist/tex/latex/rerunfilecheck/rerunfilecheck.sty
Package: rerunfilecheck 2022-07-10 v1.10 Rerun checks for auxiliary files (HO) 756 756 Package: rerunfilecheck 2022-07-10 v1.10 Rerun checks for auxiliary files (HO)
757 757
(/usr/local/texlive/2023/texmf-dist/tex/generic/uniquecounter/uniquecounter.sty 758 758 (/usr/local/texlive/2023/texmf-dist/tex/generic/uniquecounter/uniquecounter.sty
Package: uniquecounter 2019/12/15 v1.4 Provide unlimited unique counter (HO) 759 759 Package: uniquecounter 2019/12/15 v1.4 Provide unlimited unique counter (HO)
) 760 760 )
Package uniquecounter Info: New unique counter `rerunfilecheck' on input line 285.
) 763 763 )
\Hy@SectionHShift=\skip63 764 764 \Hy@SectionHShift=\skip63
) 765 765 )
\upm@smalllogo@height=\dimen262 766 766 \upm@smalllogo@height=\dimen262
) (./spimbasephdthesis.sty 767 767 ) (./spimbasephdthesis.sty
Package: spimbasephdthesis 2015/09/01 768 768 Package: spimbasephdthesis 2015/09/01
769 769
(/usr/local/texlive/2023/texmf-dist/tex/latex/lettrine/lettrine.sty 770 770 (/usr/local/texlive/2023/texmf-dist/tex/latex/lettrine/lettrine.sty
File: lettrine.sty 2023-04-18 v2.40 (Daniel Flipo) 771 771 File: lettrine.sty 2023-04-18 v2.40 (Daniel Flipo)
772 772
(/usr/local/texlive/2023/texmf-dist/tex/latex/l3packages/xfp/xfp.sty 773 773 (/usr/local/texlive/2023/texmf-dist/tex/latex/l3packages/xfp/xfp.sty
(/usr/local/texlive/2023/texmf-dist/tex/latex/l3kernel/expl3.sty 774 774 (/usr/local/texlive/2023/texmf-dist/tex/latex/l3kernel/expl3.sty
Package: expl3 2023-05-22 L3 programming layer (loader) 775 775 Package: expl3 2023-05-22 L3 programming layer (loader)
776 776
(/usr/local/texlive/2023/texmf-dist/tex/latex/l3backend/l3backend-pdftex.def 777 777 (/usr/local/texlive/2023/texmf-dist/tex/latex/l3backend/l3backend-pdftex.def
File: l3backend-pdftex.def 2023-04-19 L3 backend support: PDF output (pdfTeX) 778 778 File: l3backend-pdftex.def 2023-04-19 L3 backend support: PDF output (pdfTeX)
\l__color_backend_stack_int=\count330 779 779 \l__color_backend_stack_int=\count330
\l__pdf_internal_box=\box115 780 780 \l__pdf_internal_box=\box115
)) 781 781 ))
Package: xfp 2023-02-02 L3 Floating point unit 782 782 Package: xfp 2023-02-02 L3 Floating point unit
) 783 783 )
\c@DefaultLines=\count331 784 784 \c@DefaultLines=\count331
\c@DefaultDepth=\count332 785 785 \c@DefaultDepth=\count332
\DefaultFindent=\dimen263 786 786 \DefaultFindent=\dimen263
\DefaultNindent=\dimen264 787 787 \DefaultNindent=\dimen264
\DefaultSlope=\dimen265 788 788 \DefaultSlope=\dimen265
\DiscardVskip=\dimen266 789 789 \DiscardVskip=\dimen266
\L@lbox=\box116 790 790 \L@lbox=\box116
\L@tbox=\box117 791 791 \L@tbox=\box117
\c@L@lines=\count333 792 792 \c@L@lines=\count333
\c@L@depth=\count334 793 793 \c@L@depth=\count334
\L@Pindent=\dimen267 794 794 \L@Pindent=\dimen267
\L@Findent=\dimen268 795 795 \L@Findent=\dimen268
\L@Nindent=\dimen269 796 796 \L@Nindent=\dimen269
\L@lraise=\dimen270 797 797 \L@lraise=\dimen270
\L@first=\dimen271 798 798 \L@first=\dimen271
\L@next=\dimen272 799 799 \L@next=\dimen272
\L@slope=\dimen273 800 800 \L@slope=\dimen273
\L@height=\dimen274 801 801 \L@height=\dimen274
\L@novskip=\dimen275 802 802 \L@novskip=\dimen275
\L@target@ht=\dimen276 803 803 \L@target@ht=\dimen276
\L@target@dp=\dimen277 804 804 \L@target@dp=\dimen277
\L@target@tht=\dimen278 805 805 \L@target@tht=\dimen278
\LettrineWidth=\dimen279 806 806 \LettrineWidth=\dimen279
\LettrineHeight=\dimen280 807 807 \LettrineHeight=\dimen280
\LettrineDepth=\dimen281 808 808 \LettrineDepth=\dimen281
Loading lettrine.cfg 809 809 Loading lettrine.cfg
(/usr/local/texlive/2023/texmf-dist/tex/latex/lettrine/lettrine.cfg) 810 810 (/usr/local/texlive/2023/texmf-dist/tex/latex/lettrine/lettrine.cfg)
\Llist@everypar=\toks42 811 811 \Llist@everypar=\toks42
) 812 812 )
*** define extension value backcovermessage ****) 813 813 *** define extension value backcovermessage ****)
**** including upm extension spimufcphdthesis (upmext-spimufcphdthesis.cfg) ****
(./upmext-spimufcphdthesis.cfg *** define extension value copyright ****
*** style extension spimufcphdthesis, Copyright {(c)} 2012--14 Dr. Stéphane GALLAND. ****
*** define extension value trademarks **** 823 823 *** define extension value trademarks ****
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/helvet.sty 824 824 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/helvet.sty
Package: helvet 2020/03/25 PSNFSS-v9.3 (WaS) 825 825 Package: helvet 2020/03/25 PSNFSS-v9.3 (WaS)
) 826 826 )
*** define extension value frontillustration **** 827 827 *** define extension value frontillustration ****
*** define extension value p3illustration **** 828 828 *** define extension value p3illustration ****
*** define extension value backillustration **** 829 829 *** define extension value backillustration ****
*** define extension value watermarksize **** 830 830 *** define extension value watermarksize ****
*** define extension value universityname **** 831 831 *** define extension value universityname ****
*** define extension value speciality **** 832 832 *** define extension value speciality ****
*** define extension value defensedate **** 833 833 *** define extension value defensedate ****
*** define extension value jurytabwidth **** 834 834 *** define extension value jurytabwidth ****
*** define extension value jurystyle **** 835 835 *** define extension value jurystyle ****
*** define extension value defensemessage ****)) 836 836 *** define extension value defensemessage ****))
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/inputenc.sty 837 837 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/inputenc.sty
Package: inputenc 2021/02/14 v1.3d Input encoding file 838 838 Package: inputenc 2021/02/14 v1.3d Input encoding file
\inpenc@prehook=\toks43 839 839 \inpenc@prehook=\toks43
\inpenc@posthook=\toks44 840 840 \inpenc@posthook=\toks44
) 841 841 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/fontenc.sty 842 842 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/fontenc.sty
Package: fontenc 2021/04/29 v2.0v Standard LaTeX package 843 843 Package: fontenc 2021/04/29 v2.0v Standard LaTeX package
LaTeX Font Info: Trying to load font information for T1+phv on input line 112.
846 846
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/t1phv.fd 847 847 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/t1phv.fd
File: t1phv.fd 2020/03/25 scalable font definitions for T1/phv. 848 848 File: t1phv.fd 2020/03/25 scalable font definitions for T1/phv.
)) 849 849 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/times.sty 850 850 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/times.sty
Package: times 2020/03/25 PSNFSS-v9.3 (SPQR) 851 851 Package: times 2020/03/25 PSNFSS-v9.3 (SPQR)
) 852 852 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/adjustbox.sty 853 853 (/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/adjustbox.sty
Package: adjustbox 2022/10/17 v1.3a Adjusting TeX boxes (trim, clip, ...) 854 854 Package: adjustbox 2022/10/17 v1.3a Adjusting TeX boxes (trim, clip, ...)
855 855
(/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/adjcalc.sty 856 856 (/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/adjcalc.sty
Package: adjcalc 2012/05/16 v1.1 Provides advanced setlength with multiple back-ends (calc, etex, pgfmath)
) 859 859 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/trimclip.sty 860 860 (/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/trimclip.sty
Package: trimclip 2020/08/19 v1.2 Trim and clip general TeX material 861 861 Package: trimclip 2020/08/19 v1.2 Trim and clip general TeX material
862 862
(/usr/local/texlive/2023/texmf-dist/tex/latex/collectbox/collectbox.sty 863 863 (/usr/local/texlive/2023/texmf-dist/tex/latex/collectbox/collectbox.sty
Package: collectbox 2022/10/17 v0.4c Collect macro arguments as boxes 864 864 Package: collectbox 2022/10/17 v0.4c Collect macro arguments as boxes
\collectedbox=\box118 865 865 \collectedbox=\box118
) 866 866 )
\tc@llx=\dimen282 867 867 \tc@llx=\dimen282
\tc@lly=\dimen283 868 868 \tc@lly=\dimen283
\tc@urx=\dimen284 869 869 \tc@urx=\dimen284
\tc@ury=\dimen285 870 870 \tc@ury=\dimen285
Package trimclip Info: Using driver 'tc-pdftex.def'. 871 871 Package trimclip Info: Using driver 'tc-pdftex.def'.
872 872
(/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/tc-pdftex.def 873 873 (/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/tc-pdftex.def
File: tc-pdftex.def 2019/01/04 v2.2 Clipping driver for pdftex 874 874 File: tc-pdftex.def 2019/01/04 v2.2 Clipping driver for pdftex
)) 875 875 ))
\adjbox@Width=\dimen286 876 876 \adjbox@Width=\dimen286
\adjbox@Height=\dimen287 877 877 \adjbox@Height=\dimen287
\adjbox@Depth=\dimen288 878 878 \adjbox@Depth=\dimen288
\adjbox@Totalheight=\dimen289 879 879 \adjbox@Totalheight=\dimen289
\adjbox@pwidth=\dimen290 880 880 \adjbox@pwidth=\dimen290
\adjbox@pheight=\dimen291 881 881 \adjbox@pheight=\dimen291
\adjbox@pdepth=\dimen292 882 882 \adjbox@pdepth=\dimen292
\adjbox@ptotalheight=\dimen293 883 883 \adjbox@ptotalheight=\dimen293
884 884
(/usr/local/texlive/2023/texmf-dist/tex/latex/ifoddpage/ifoddpage.sty 885 885 (/usr/local/texlive/2023/texmf-dist/tex/latex/ifoddpage/ifoddpage.sty
Package: ifoddpage 2022/10/18 v1.2 Conditionals for odd/even page detection 886 886 Package: ifoddpage 2022/10/18 v1.2 Conditionals for odd/even page detection
\c@checkoddpage=\count335 887 887 \c@checkoddpage=\count335
) 888 888 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/varwidth/varwidth.sty 889 889 (/usr/local/texlive/2023/texmf-dist/tex/latex/varwidth/varwidth.sty
Package: varwidth 2009/03/30 ver 0.92; Variable-width minipages 890 890 Package: varwidth 2009/03/30 ver 0.92; Variable-width minipages
\@vwid@box=\box119 891 891 \@vwid@box=\box119
\sift@deathcycles=\count336 892 892 \sift@deathcycles=\count336
\@vwid@loff=\dimen294 893 893 \@vwid@loff=\dimen294
\@vwid@roff=\dimen295 894 894 \@vwid@roff=\dimen295
)) 895 895 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/algorithms/algorithm.sty 896 896 (/usr/local/texlive/2023/texmf-dist/tex/latex/algorithms/algorithm.sty
Package: algorithm 2009/08/24 v0.1 Document Style `algorithm' - floating environment
899 899
(/usr/local/texlive/2023/texmf-dist/tex/latex/float/float.sty 900 900 (/usr/local/texlive/2023/texmf-dist/tex/latex/float/float.sty
Package: float 2001/11/08 v1.3d Float enhancements (AL) 901 901 Package: float 2001/11/08 v1.3d Float enhancements (AL)
\c@float@type=\count337 902 902 \c@float@type=\count337
\float@exts=\toks45 903 903 \float@exts=\toks45
\float@box=\box120 904 904 \float@box=\box120
\@float@everytoks=\toks46 905 905 \@float@everytoks=\toks46
\@floatcapt=\box121 906 906 \@floatcapt=\box121
) 907 907 )
\@float@every@algorithm=\toks47 908 908 \@float@every@algorithm=\toks47
\c@algorithm=\count338 909 909 \c@algorithm=\count338
) 910 910 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/algorithmicx/algpseudocode.sty 911 911 (/usr/local/texlive/2023/texmf-dist/tex/latex/algorithmicx/algpseudocode.sty
Package: algpseudocode 912 912 Package: algpseudocode
913 913
(/usr/local/texlive/2023/texmf-dist/tex/latex/algorithmicx/algorithmicx.sty 914 914 (/usr/local/texlive/2023/texmf-dist/tex/latex/algorithmicx/algorithmicx.sty
Package: algorithmicx 2005/04/27 v1.2 Algorithmicx 915 915 Package: algorithmicx 2005/04/27 v1.2 Algorithmicx
916 916
Document Style algorithmicx 1.2 - a greatly improved `algorithmic' style 917 917 Document Style algorithmicx 1.2 - a greatly improved `algorithmic' style
\c@ALG@line=\count339 918 918 \c@ALG@line=\count339
\c@ALG@rem=\count340 919 919 \c@ALG@rem=\count340
\c@ALG@nested=\count341 920 920 \c@ALG@nested=\count341
\ALG@tlm=\skip64 921 921 \ALG@tlm=\skip64
\ALG@thistlm=\skip65 922 922 \ALG@thistlm=\skip65
\c@ALG@Lnr=\count342 923 923 \c@ALG@Lnr=\count342
\c@ALG@blocknr=\count343 924 924 \c@ALG@blocknr=\count343
\c@ALG@storecount=\count344 925 925 \c@ALG@storecount=\count344
\c@ALG@tmpcounter=\count345 926 926 \c@ALG@tmpcounter=\count345
\ALG@tmplength=\skip66 927 927 \ALG@tmplength=\skip66
) 928 928 )
Document Style - pseudocode environments for use with the `algorithmicx' style 929 929 Document Style - pseudocode environments for use with the `algorithmicx' style
) *** define extension value defensedate **** 930 930 ) *** define extension value defensedate ****
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/layout.sty 931 931 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/layout.sty
Package: layout 2021-03-10 v1.2e Show layout parameters 932 932 Package: layout 2021-03-10 v1.2e Show layout parameters
\oneinch=\count346 933 933 \oneinch=\count346
\cnt@paperwidth=\count347 934 934 \cnt@paperwidth=\count347
\cnt@paperheight=\count348 935 935 \cnt@paperheight=\count348
\cnt@hoffset=\count349 936 936 \cnt@hoffset=\count349
\cnt@voffset=\count350 937 937 \cnt@voffset=\count350
\cnt@textheight=\count351 938 938 \cnt@textheight=\count351
\cnt@textwidth=\count352 939 939 \cnt@textwidth=\count352
\cnt@topmargin=\count353 940 940 \cnt@topmargin=\count353
\cnt@oddsidemargin=\count354 941 941 \cnt@oddsidemargin=\count354
\cnt@evensidemargin=\count355 942 942 \cnt@evensidemargin=\count355
\cnt@headheight=\count356 943 943 \cnt@headheight=\count356
\cnt@headsep=\count357 944 944 \cnt@headsep=\count357
\cnt@marginparsep=\count358 945 945 \cnt@marginparsep=\count358
\cnt@marginparwidth=\count359 946 946 \cnt@marginparwidth=\count359
\cnt@marginparpush=\count360 947 947 \cnt@marginparpush=\count360
\cnt@footskip=\count361 948 948 \cnt@footskip=\count361
\fheight=\count362 949 949 \fheight=\count362
\ref@top=\count363 950 950 \ref@top=\count363
\ref@hoffset=\count364 951 951 \ref@hoffset=\count364
\ref@voffset=\count365 952 952 \ref@voffset=\count365
\ref@head=\count366 953 953 \ref@head=\count366
\ref@body=\count367 954 954 \ref@body=\count367
\ref@foot=\count368 955 955 \ref@foot=\count368
\ref@margin=\count369 956 956 \ref@margin=\count369
\ref@marginwidth=\count370 957 957 \ref@marginwidth=\count370
\ref@marginpar=\count371 958 958 \ref@marginpar=\count371
\Interval=\count372 959 959 \Interval=\count372
\ExtraYPos=\count373 960 960 \ExtraYPos=\count373
\PositionX=\count374 961 961 \PositionX=\count374
\PositionY=\count375 962 962 \PositionY=\count375
\ArrowLength=\count376 963 963 \ArrowLength=\count376
) 964 964 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/geometry/geometry.sty 965 965 (/usr/local/texlive/2023/texmf-dist/tex/latex/geometry/geometry.sty
Package: geometry 2020/01/02 v5.9 Page Geometry 966 966 Package: geometry 2020/01/02 v5.9 Page Geometry
967 967
(/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/ifvtex.sty 968 968 (/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/ifvtex.sty
Package: ifvtex 2019/10/25 v1.7 ifvtex legacy package. Use iftex instead. 969 969 Package: ifvtex 2019/10/25 v1.7 ifvtex legacy package. Use iftex instead.
) 970 970 )
\Gm@cnth=\count377 971 971 \Gm@cnth=\count377
\Gm@cntv=\count378 972 972 \Gm@cntv=\count378
\c@Gm@tempcnt=\count379 973 973 \c@Gm@tempcnt=\count379
\Gm@bindingoffset=\dimen296 974 974 \Gm@bindingoffset=\dimen296
\Gm@wd@mp=\dimen297 975 975 \Gm@wd@mp=\dimen297
\Gm@odd@mp=\dimen298 976 976 \Gm@odd@mp=\dimen298
\Gm@even@mp=\dimen299 977 977 \Gm@even@mp=\dimen299
\Gm@layoutwidth=\dimen300 978 978 \Gm@layoutwidth=\dimen300
\Gm@layoutheight=\dimen301 979 979 \Gm@layoutheight=\dimen301
\Gm@layouthoffset=\dimen302 980 980 \Gm@layouthoffset=\dimen302
\Gm@layoutvoffset=\dimen303 981 981 \Gm@layoutvoffset=\dimen303
\Gm@dimlist=\toks48 982 982 \Gm@dimlist=\toks48
) (./main.aux 983 983 ) (./main.aux
(./chapters/contexte2.aux) (./chapters/EIAH.aux) (./chapters/CBR.aux) 984 984 (./chapters/contexte2.aux) (./chapters/EIAH.aux) (./chapters/CBR.aux)
(./chapters/Architecture.aux) (./chapters/ESCBR.aux) (./chapters/TS.aux 985 985 (./chapters/Architecture.aux) (./chapters/ESCBR.aux) (./chapters/TS.aux
986 986
LaTeX Warning: Label `eqBeta' multiply defined. 987 987 LaTeX Warning: Label `eqBeta' multiply defined.
988 988
989 989
LaTeX Warning: Label `fig:Amodel' multiply defined. 990 990 LaTeX Warning: Label `fig:Amodel' multiply defined.
991 991
992 992
LaTeX Warning: Label `fig:stabilityBP' multiply defined. 993 993 LaTeX Warning: Label `fig:stabilityBP' multiply defined.
994 994
) (./chapters/Conclusions.aux) (./chapters/Publications.aux)) 995 995 ) (./chapters/Conclusions.aux) (./chapters/Publications.aux))
\openout1 = `main.aux'. 996 996 \openout1 = `main.aux'.
997 997
LaTeX Font Info: Checking defaults for OML/txmi/m/it on input line 231. 998 998 LaTeX Font Info: Checking defaults for OML/txmi/m/it on input line 231.
LaTeX Font Info: Trying to load font information for OML+txmi on input line 999 999 LaTeX Font Info: Trying to load font information for OML+txmi on input line
231. 1000 1000 231.
1001 1001
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omltxmi.fd 1002 1002 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omltxmi.fd
File: omltxmi.fd 2000/12/15 v3.1 1003 1003 File: omltxmi.fd 2000/12/15 v3.1
) 1004 1004 )
LaTeX Font Info: ... okay on input line 231. 1005 1005 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for OMS/txsy/m/n on input line 231. 1006 1006 LaTeX Font Info: Checking defaults for OMS/txsy/m/n on input line 231.
LaTeX Font Info: Trying to load font information for OMS+txsy on input line 1007 1007 LaTeX Font Info: Trying to load font information for OMS+txsy on input line
231. 1008 1008 231.
1009 1009
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omstxsy.fd 1010 1010 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omstxsy.fd
File: omstxsy.fd 2000/12/15 v3.1 1011 1011 File: omstxsy.fd 2000/12/15 v3.1
) 1012 1012 )
LaTeX Font Info: ... okay on input line 231. 1013 1013 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 231. 1014 1014 LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1015 1015 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 231. 1016 1016 LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1017 1017 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for TS1/cmr/m/n on input line 231. 1018 1018 LaTeX Font Info: Checking defaults for TS1/cmr/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1019 1019 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for OMX/txex/m/n on input line 231. 1020 1020 LaTeX Font Info: Checking defaults for OMX/txex/m/n on input line 231.
LaTeX Font Info: Trying to load font information for OMX+txex on input line 1021 1021 LaTeX Font Info: Trying to load font information for OMX+txex on input line
231. 1022 1022 231.
1023 1023
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omxtxex.fd 1024 1024 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omxtxex.fd
File: omxtxex.fd 2000/12/15 v3.1 1025 1025 File: omxtxex.fd 2000/12/15 v3.1
) 1026 1026 )
LaTeX Font Info: ... okay on input line 231. 1027 1027 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for U/txexa/m/n on input line 231. 1028 1028 LaTeX Font Info: Checking defaults for U/txexa/m/n on input line 231.
LaTeX Font Info: Trying to load font information for U+txexa on input line 231.
1031 1031
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxexa.fd 1032 1032 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxexa.fd
File: utxexa.fd 2000/12/15 v3.1 1033 1033 File: utxexa.fd 2000/12/15 v3.1
) 1034 1034 )
LaTeX Font Info: ... okay on input line 231. 1035 1035 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 231. 1036 1036 LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1037 1037 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for PU/pdf/m/n on input line 231. 1038 1038 LaTeX Font Info: Checking defaults for PU/pdf/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1039 1039 LaTeX Font Info: ... okay on input line 231.
1040 1040
(/usr/local/texlive/2023/texmf-dist/tex/context/base/mkii/supp-pdf.mkii 1041 1041 (/usr/local/texlive/2023/texmf-dist/tex/context/base/mkii/supp-pdf.mkii
[Loading MPS to PDF converter (version 2006.09.02).] 1042 1042 [Loading MPS to PDF converter (version 2006.09.02).]
\scratchcounter=\count380 1043 1043 \scratchcounter=\count380
\scratchdimen=\dimen304 1044 1044 \scratchdimen=\dimen304
\scratchbox=\box122 1045 1045 \scratchbox=\box122
\nofMPsegments=\count381 1046 1046 \nofMPsegments=\count381
\nofMParguments=\count382 1047 1047 \nofMParguments=\count382
\everyMPshowfont=\toks49 1048 1048 \everyMPshowfont=\toks49
\MPscratchCnt=\count383 1049 1049 \MPscratchCnt=\count383
\MPscratchDim=\dimen305 1050 1050 \MPscratchDim=\dimen305
\MPnumerator=\count384 1051 1051 \MPnumerator=\count384
\makeMPintoPDFobject=\count385 1052 1052 \makeMPintoPDFobject=\count385
\everyMPtoPDFconversion=\toks50 1053 1053 \everyMPtoPDFconversion=\toks50
) (/usr/local/texlive/2023/texmf-dist/tex/latex/epstopdf-pkg/epstopdf-base.sty 1054 1054 ) (/usr/local/texlive/2023/texmf-dist/tex/latex/epstopdf-pkg/epstopdf-base.sty
Package: epstopdf-base 2020-01-24 v2.11 Base part for package epstopdf 1055 1055 Package: epstopdf-base 2020-01-24 v2.11 Base part for package epstopdf
Package epstopdf-base Info: Redefining graphics rule for `.eps' on input line 485.
1058 1058
(/usr/local/texlive/2023/texmf-dist/tex/latex/latexconfig/epstopdf-sys.cfg 1059 1059 (/usr/local/texlive/2023/texmf-dist/tex/latex/latexconfig/epstopdf-sys.cfg
File: epstopdf-sys.cfg 2010/07/13 v1.3 Configuration of (r)epstopdf for TeX Live
)) 1062 1062 ))
LaTeX Info: Redefining \degres on input line 231. 1063 1063 LaTeX Info: Redefining \degres on input line 231.
LaTeX Info: Redefining \up on input line 231. 1064 1064 LaTeX Info: Redefining \up on input line 231.
Package caption Info: Begin \AtBeginDocument code. 1065 1065 Package caption Info: Begin \AtBeginDocument code.
Package caption Info: float package is loaded. 1066 1066 Package caption Info: float package is loaded.
Package caption Info: hyperref package is loaded. 1067 1067 Package caption Info: hyperref package is loaded.
Package caption Info: picinpar package is loaded. 1068 1068 Package caption Info: picinpar package is loaded.
Package caption Info: End \AtBeginDocument code. 1069 1069 Package caption Info: End \AtBeginDocument code.
1070 1070
*** Overriding the 'enumerate' environment. Pass option 'standardlists' for avoiding this override.
*** Overriding the 'description' environment. Pass option 'standardlists' for avoiding this override. ************ USE CUSTOM FRONT COVER
Package hyperref Info: Link coloring OFF on input line 231. 1075 1075 Package hyperref Info: Link coloring OFF on input line 231.
(./main.out) 1076 1076 (./main.out)
(./main.out) 1077 1077 (./main.out)
\@outlinefile=\write3 1078 1078 \@outlinefile=\write3
\openout3 = `main.out'. 1079 1079 \openout3 = `main.out'.
1080 1080
1081 1081
*geometry* driver: auto-detecting 1082 1082 *geometry* driver: auto-detecting
*geometry* detected driver: pdftex 1083 1083 *geometry* detected driver: pdftex
*geometry* verbose mode - [ preamble ] result: 1084 1084 *geometry* verbose mode - [ preamble ] result:
* pass: disregarded the geometry package! 1085 1085 * pass: disregarded the geometry package!
* \paperwidth=598.14806pt 1086 1086 * \paperwidth=598.14806pt
* \paperheight=845.90042pt 1087 1087 * \paperheight=845.90042pt
* \textwidth=427.43153pt 1088 1088 * \textwidth=427.43153pt
* \textheight=671.71976pt 1089 1089 * \textheight=671.71976pt
* \oddsidemargin=99.58464pt 1090 1090 * \oddsidemargin=99.58464pt
* \evensidemargin=71.13188pt 1091 1091 * \evensidemargin=71.13188pt
* \topmargin=56.9055pt 1092 1092 * \topmargin=56.9055pt
* \headheight=12.0pt 1093 1093 * \headheight=12.0pt
* \headsep=31.29802pt 1094 1094 * \headsep=31.29802pt
* \topskip=11.0pt 1095 1095 * \topskip=11.0pt
* \footskip=31.29802pt 1096 1096 * \footskip=31.29802pt
* \marginparwidth=54.2025pt 1097 1097 * \marginparwidth=54.2025pt
* \marginparsep=7.0pt 1098 1098 * \marginparsep=7.0pt
* \columnsep=10.0pt 1099 1099 * \columnsep=10.0pt
* \skip\footins=10.0pt plus 4.0pt minus 2.0pt 1100 1100 * \skip\footins=10.0pt plus 4.0pt minus 2.0pt
* \hoffset=-72.26999pt 1101 1101 * \hoffset=-72.26999pt
* \voffset=-72.26999pt 1102 1102 * \voffset=-72.26999pt
* \mag=1000 1103 1103 * \mag=1000
* \@twocolumnfalse 1104 1104 * \@twocolumnfalse
* \@twosidetrue 1105 1105 * \@twosidetrue
* \@mparswitchtrue 1106 1106 * \@mparswitchtrue
* \@reversemarginfalse 1107 1107 * \@reversemarginfalse
* (1in=72.27pt=25.4mm, 1cm=28.453pt) 1108 1108 * (1in=72.27pt=25.4mm, 1cm=28.453pt)
1109 1109
*geometry* verbose mode - [ newgeometry ] result: 1110 1110 *geometry* verbose mode - [ newgeometry ] result:
* driver: pdftex 1111 1111 * driver: pdftex
* paper: a4paper 1112 1112 * paper: a4paper
* layout: <same size as paper> 1113 1113 * layout: <same size as paper>
* layoutoffset:(h,v)=(0.0pt,0.0pt) 1114 1114 * layoutoffset:(h,v)=(0.0pt,0.0pt)
* modes: twoside 1115 1115 * modes: twoside
* h-part:(L,W,R)=(170.71652pt, 355.65306pt, 71.77847pt) 1116 1116 * h-part:(L,W,R)=(170.71652pt, 355.65306pt, 71.77847pt)
* v-part:(T,H,B)=(101.50906pt, 741.54591pt, 2.84544pt) 1117 1117 * v-part:(T,H,B)=(101.50906pt, 741.54591pt, 2.84544pt)
* \paperwidth=598.14806pt 1118 1118 * \paperwidth=598.14806pt
* \paperheight=845.90042pt 1119 1119 * \paperheight=845.90042pt
* \textwidth=355.65306pt 1120 1120 * \textwidth=355.65306pt
* \textheight=741.54591pt 1121 1121 * \textheight=741.54591pt
* \oddsidemargin=98.44653pt 1122 1122 * \oddsidemargin=98.44653pt
* \evensidemargin=-0.49152pt 1123 1123 * \evensidemargin=-0.49152pt
* \topmargin=-14.05894pt 1124 1124 * \topmargin=-14.05894pt
* \headheight=12.0pt 1125 1125 * \headheight=12.0pt
* \headsep=31.29802pt 1126 1126 * \headsep=31.29802pt
* \topskip=11.0pt 1127 1127 * \topskip=11.0pt
* \footskip=31.29802pt 1128 1128 * \footskip=31.29802pt
* \marginparwidth=54.2025pt 1129 1129 * \marginparwidth=54.2025pt
* \marginparsep=7.0pt 1130 1130 * \marginparsep=7.0pt
* \columnsep=10.0pt 1131 1131 * \columnsep=10.0pt
* \skip\footins=10.0pt plus 4.0pt minus 2.0pt 1132 1132 * \skip\footins=10.0pt plus 4.0pt minus 2.0pt
* \hoffset=-72.26999pt 1133 1133 * \hoffset=-72.26999pt
* \voffset=-72.26999pt 1134 1134 * \voffset=-72.26999pt
* \mag=1000 1135 1135 * \mag=1000
* \@twocolumnfalse 1136 1136 * \@twocolumnfalse
* \@twosidetrue 1137 1137 * \@twosidetrue
* \@mparswitchtrue 1138 1138 * \@mparswitchtrue
* \@reversemarginfalse 1139 1139 * \@reversemarginfalse
* (1in=72.27pt=25.4mm, 1cm=28.453pt) 1140 1140 * (1in=72.27pt=25.4mm, 1cm=28.453pt)
1141 1141
<images_logos/image1_logoUBFC_grand.png, id=385, 610.4406pt x 217.0509pt> 1142 1142 <images_logos/image1_logoUBFC_grand.png, id=385, 610.4406pt x 217.0509pt>
File: images_logos/image1_logoUBFC_grand.png Graphic file (type png) 1143 1143 File: images_logos/image1_logoUBFC_grand.png Graphic file (type png)
<use images_logos/image1_logoUBFC_grand.png> 1144 1144 <use images_logos/image1_logoUBFC_grand.png>
Package pdftex.def Info: images_logos/image1_logoUBFC_grand.png used on input line 237.
(pdftex.def) Requested size: 142.25905pt x 50.57973pt. 1147 1147 (pdftex.def) Requested size: 142.25905pt x 50.57973pt.
<images_logos/logo_UFC_2018_transparence.png, id=387, 104.5506pt x 34.6896pt> 1148 1148 <images_logos/logo_UFC_2018_transparence.png, id=387, 104.5506pt x 34.6896pt>
File: images_logos/logo_UFC_2018_transparence.png Graphic file (type png) 1149 1149 File: images_logos/logo_UFC_2018_transparence.png Graphic file (type png)
<use images_logos/logo_UFC_2018_transparence.png> 1150 1150 <use images_logos/logo_UFC_2018_transparence.png>
Package pdftex.def Info: images_logos/logo_UFC_2018_transparence.png used on input line 237.
(pdftex.def) Requested size: 142.25905pt x 47.20264pt. 1153 1153 (pdftex.def) Requested size: 142.25905pt x 47.20264pt.
LaTeX Font Info: Trying to load font information for OT1+txr on input line 248.
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/ot1txr.fd 1156 1156 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/ot1txr.fd
File: ot1txr.fd 2000/12/15 v3.1 1157 1157 File: ot1txr.fd 2000/12/15 v3.1
) 1158 1158 )
LaTeX Font Info: Trying to load font information for U+txmia on input line 248.
1161 1161
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxmia.fd 1162 1162 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxmia.fd
File: utxmia.fd 2000/12/15 v3.1 1163 1163 File: utxmia.fd 2000/12/15 v3.1
) 1164 1164 )
LaTeX Font Info: Trying to load font information for U+txsya on input line 248.
1167 1167
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsya.fd 1168 1168 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsya.fd
File: utxsya.fd 2000/12/15 v3.1 1169 1169 File: utxsya.fd 2000/12/15 v3.1
) 1170 1170 )
LaTeX Font Info: Trying to load font information for U+txsyb on input line 248.
1173 1173
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsyb.fd 1174 1174 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsyb.fd
File: utxsyb.fd 2000/12/15 v3.1 1175 1175 File: utxsyb.fd 2000/12/15 v3.1
) 1176 1176 )
LaTeX Font Info: Trying to load font information for U+txsyc on input line 248.
1179 1179
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsyc.fd 1180 1180 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsyc.fd
File: utxsyc.fd 2000/12/15 v3.1 1181 1181 File: utxsyc.fd 2000/12/15 v3.1
) [1 1182 1182 ) [1
1183 1183
1184 1184
1185 1185
1186 1186
{/usr/local/texlive/2023/texmf-var/fonts/map/pdftex/updmap/pdftex.map}{/usr/local/texlive/2023/texmf-dist/fonts/enc/dvips/base/8r.enc} <./images_logos/image1_logoUBFC_grand.png> <./images_logos/logo_UFC_2018_transparence.png>] [2
1190 1190
1191 1191
] [3] [4] 1192 1192 ] [3] [4]
(./main.toc 1193 1193 (./main.toc
LaTeX Font Info: Font shape `T1/phv/m/it' in size <10.95> not available 1194 1194 LaTeX Font Info: Font shape `T1/phv/m/it' in size <10.95> not available
(Font) Font shape `T1/phv/m/sl' tried instead on input line 24. 1195 1195 (Font) Font shape `T1/phv/m/sl' tried instead on input line 24.
1196 1196
Underfull \vbox (badness 1043) has occurred while \output is active [] 1197 1197 Underfull \vbox (badness 1043) has occurred while \output is active []
1198 1198
[5 1199 1199 [5
1200 1200
] 1201 1201 ]
[6] [7] 1202 1202 [6] [7]
Overfull \hbox (1.29184pt too wide) detected at line 89 1203 1203 Overfull \hbox (1.29184pt too wide) detected at line 89
[][]\T1/phv/m/n/10.95 100[] 1204 1204 [][]\T1/phv/m/n/10.95 100[]
[] 1205 1205 []
1206 1206
1207 1207
Overfull \hbox (1.29184pt too wide) detected at line 90 1208 1208 Overfull \hbox (1.29184pt too wide) detected at line 90
[][]\T1/phv/m/n/10.95 100[] 1209 1209 [][]\T1/phv/m/n/10.95 100[]
[] 1210 1210 []
1211 1211
1212 1212
Overfull \hbox (1.29184pt too wide) detected at line 92 1213 1213 Overfull \hbox (1.29184pt too wide) detected at line 92
[][]\T1/phv/m/n/10.95 103[] 1214 1214 [][]\T1/phv/m/n/10.95 103[]
[] 1215 1215 []
1216 1216
1217 1217
Overfull \hbox (1.29184pt too wide) detected at line 93 1218 1218 Overfull \hbox (1.29184pt too wide) detected at line 93
[][]\T1/phv/m/n/10.95 105[] 1219 1219 [][]\T1/phv/m/n/10.95 105[]
[] 1220 1220 []
1221 1221
1222 1222
Overfull \hbox (1.29184pt too wide) detected at line 95 1223 1223 Overfull \hbox (1.29184pt too wide) detected at line 95
[][]\T1/phv/m/n/10.95 107[] 1224 1224 [][]\T1/phv/m/n/10.95 107[]
[] 1225 1225 []
1226 1226
1227 1227
Overfull \hbox (1.29184pt too wide) detected at line 96 1228 1228 Overfull \hbox (1.29184pt too wide) detected at line 96
[][]\T1/phv/m/n/10.95 108[] 1229 1229 [][]\T1/phv/m/n/10.95 108[]
[] 1230 1230 []
1231 1231
) 1232 1232 )
\tf@toc=\write4 1233 1233 \tf@toc=\write4
\openout4 = `main.toc'. 1234 1234 \openout4 = `main.toc'.
1235 1235
[8] [1 1236 1236 [8] [1
1237 1237
1238 1238
] [2] 1239 1239 ] [2]
Chapitre 1. 1240 1240 Chapitre 1.
Package lettrine.sty Info: Targeted height = 19.96736pt 1241 1241 Package lettrine.sty Info: Targeted height = 19.96736pt
(lettrine.sty) (for loversize=0, accent excluded), 1242 1242 (lettrine.sty) (for loversize=0, accent excluded),
(lettrine.sty) Lettrine height = 20.612pt (\uppercase {C}); 1243 1243 (lettrine.sty) Lettrine height = 20.612pt (\uppercase {C});
(lettrine.sty) reported on input line 340. 1244 1244 (lettrine.sty) reported on input line 340.
1245 1245
Overfull \hbox (6.79999pt too wide) in paragraph at lines 340--340 1246 1246 Overfull \hbox (6.79999pt too wide) in paragraph at lines 340--340
[][][][] 1247 1247 [][][][]
[] 1248 1248 []
1249 1249
1250 1250
Underfull \vbox (badness 10000) has occurred while \output is active [] 1251 1251 Underfull \vbox (badness 10000) has occurred while \output is active []
1252 1252
[3 1253 1253 [3
1254 1254
] 1255 1255 ]
[4] [5] [6 1256 1256 [4] [5] [6
1257 1257
] [7] [8] 1258 1258 ] [7] [8]
\openout2 = `./chapters/contexte2.aux'. 1259 1259 \openout2 = `./chapters/contexte2.aux'.
1260 1260
(./chapters/contexte2.tex 1261 1261 (./chapters/contexte2.tex
Chapitre 2. 1262 1262 Chapitre 2.
<./Figures/TLearning.png, id=566, 603.25375pt x 331.2375pt> 1263 1263 <./Figures/TLearning.png, id=566, 603.25375pt x 331.2375pt>
File: ./Figures/TLearning.png Graphic file (type png) 1264 1264 File: ./Figures/TLearning.png Graphic file (type png)
<use ./Figures/TLearning.png> 1265 1265 <use ./Figures/TLearning.png>
Package pdftex.def Info: ./Figures/TLearning.png used on input line 15. 1266 1266 Package pdftex.def Info: ./Figures/TLearning.png used on input line 15.
(pdftex.def) Requested size: 427.43153pt x 234.69505pt. 1267 1267 (pdftex.def) Requested size: 427.43153pt x 234.69505pt.
[9 1268 1268 [9
1269 1269
1270 1270
] 1271 1271 ]
<./Figures/EIAH.png, id=575, 643.40375pt x 362.35374pt> 1272 1272 <./Figures/EIAH.png, id=575, 643.40375pt x 362.35374pt>
File: ./Figures/EIAH.png Graphic file (type png) 1273 1273 File: ./Figures/EIAH.png Graphic file (type png)
<use ./Figures/EIAH.png> 1274 1274 <use ./Figures/EIAH.png>
Package pdftex.def Info: ./Figures/EIAH.png used on input line 32. 1275 1275 Package pdftex.def Info: ./Figures/EIAH.png used on input line 32.
(pdftex.def) Requested size: 427.43153pt x 240.73pt. 1276 1276 (pdftex.def) Requested size: 427.43153pt x 240.73pt.
1277 1277
1278 1278
LaTeX Warning: `!h' float specifier changed to `!ht'. 1279 1279 LaTeX Warning: `!h' float specifier changed to `!ht'.
1280 1280
[10 <./Figures/TLearning.png>] [11 <./Figures/EIAH.png>] [12] 1281 1281 [10 <./Figures/TLearning.png>] [11 <./Figures/EIAH.png>] [12]
<./Figures/cycle.png, id=603, 668.4975pt x 665.48625pt> 1282 1282 <./Figures/cycle.png, id=603, 668.4975pt x 665.48625pt>
File: ./Figures/cycle.png Graphic file (type png) 1283 1283 File: ./Figures/cycle.png Graphic file (type png)
<use ./Figures/cycle.png> 1284 1284 <use ./Figures/cycle.png>
Package pdftex.def Info: ./Figures/cycle.png used on input line 83. 1285 1285 Package pdftex.def Info: ./Figures/cycle.png used on input line 83.
(pdftex.def) Requested size: 427.43153pt x 425.51372pt. 1286 1286 (pdftex.def) Requested size: 427.43153pt x 425.51372pt.
[13 <./Figures/cycle.png>] 1287 1287 [13 <./Figures/cycle.png>]
<./Figures/Reuse.png, id=625, 383.4325pt x 182.6825pt> 1288 1288 <./Figures/Reuse.png, id=625, 383.4325pt x 182.6825pt>
File: ./Figures/Reuse.png Graphic file (type png) 1289 1289 File: ./Figures/Reuse.png Graphic file (type png)
<use ./Figures/Reuse.png> 1290 1290 <use ./Figures/Reuse.png>
Package pdftex.def Info: ./Figures/Reuse.png used on input line 112. 1291 1291 Package pdftex.def Info: ./Figures/Reuse.png used on input line 112.
(pdftex.def) Requested size: 299.20076pt x 142.55865pt. 1292 1292 (pdftex.def) Requested size: 299.20076pt x 142.55865pt.
1293 1293
Underfull \hbox (badness 10000) in paragraph at lines 112--112 1294 1294 Underfull \hbox (badness 10000) in paragraph at lines 112--112
[]\T1/phv/m/sc/10.95 Figure 2.4 \T1/phv/m/n/10.95 ^^U |Prin-cipe de réuti-li-sa-tion dans le RàPC (Tra-duit de
[] 1297 1297 []
1298 1298
[14] 1299 1299 [14]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1300 1300 Underfull \vbox (badness 10000) has occurred while \output is active []
1301 1301
[15 <./Figures/Reuse.png>] 1302 1302 [15 <./Figures/Reuse.png>]
<./Figures/CycleCBR.png, id=646, 147.1899pt x 83.8332pt> 1303 1303 <./Figures/CycleCBR.png, id=646, 147.1899pt x 83.8332pt>
File: ./Figures/CycleCBR.png Graphic file (type png) 1304 1304 File: ./Figures/CycleCBR.png Graphic file (type png)
<use ./Figures/CycleCBR.png> 1305 1305 <use ./Figures/CycleCBR.png>
Package pdftex.def Info: ./Figures/CycleCBR.png used on input line 156. 1306 1306 Package pdftex.def Info: ./Figures/CycleCBR.png used on input line 156.
(pdftex.def) Requested size: 427.43153pt x 243.45026pt. 1307 1307 (pdftex.def) Requested size: 427.43153pt x 243.45026pt.
1308 1308
Underfull \vbox (badness 10000) has occurred while \output is active [] 1309 1309 Underfull \vbox (badness 10000) has occurred while \output is active []
1310 1310
[16 <./Figures/CycleCBR.png>] 1311 1311 [16 <./Figures/CycleCBR.png>]
Underfull \vbox (badness 3189) has occurred while \output is active [] 1312 1312 Underfull \vbox (badness 3189) has occurred while \output is active []
1313 1313
[17] 1314 1314 [17]
[18] 1315 1315 [18]
1316 1316
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1319 1319
LaTeX Font Info: Trying to load font information for TS1+phv on input line 265.
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/ts1phv.fd 1322 1322 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/ts1phv.fd
File: ts1phv.fd 2020/03/25 scalable font definitions for TS1/phv. 1323 1323 File: ts1phv.fd 2020/03/25 scalable font definitions for TS1/phv.
) 1324 1324 )
1325 1325
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1328 1328
1329 1329
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1332 1332
1333 1333
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1336 1336
1337 1337
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1340 1340
1341 1341
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1344 1344
Missing character: There is no · in font txr! 1345 1345 Missing character: There is no · in font txr!
Missing character: There is no · in font txr! 1346 1346 Missing character: There is no · in font txr!
Missing character: There is no · in font txr! 1347 1347 Missing character: There is no · in font txr!
1348 1348
LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined 1349 1349 LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined
(Font) using `T1/phv/m/it' instead on input line 284. 1350 1350 (Font) using `T1/phv/m/it' instead on input line 284.
1351 1351
[19] [20] 1352 1352 [19] [20]
1353 1353
LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined 1354 1354 LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined
(Font) using `T1/phv/m/it' instead on input line 333. 1355 1355 (Font) using `T1/phv/m/it' instead on input line 333.
1356 1356
1357 1357
LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined 1358 1358 LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined
(Font) using `T1/phv/m/it' instead on input line 337. 1359 1359 (Font) using `T1/phv/m/it' instead on input line 337.
1360 1360
<./Figures/beta-distribution.png, id=722, 621.11293pt x 480.07928pt> 1361 1361 <./Figures/beta-distribution.png, id=722, 621.11293pt x 480.07928pt>
File: ./Figures/beta-distribution.png Graphic file (type png) 1362 1362 File: ./Figures/beta-distribution.png Graphic file (type png)
<use ./Figures/beta-distribution.png> 1363 1363 <use ./Figures/beta-distribution.png>
Package pdftex.def Info: ./Figures/beta-distribution.png used on input line 34 1364 1364 Package pdftex.def Info: ./Figures/beta-distribution.png used on input line 34
5. 1365 1365 5.
(pdftex.def) Requested size: 427.43153pt x 330.38333pt. 1366 1366 (pdftex.def) Requested size: 427.43153pt x 330.38333pt.
[21]) [22 <./Figures/beta-distribution.png>] 1367 1367 [21]) [22 <./Figures/beta-distribution.png>]
\openout2 = `./chapters/EIAH.aux'. 1368 1368 \openout2 = `./chapters/EIAH.aux'.
1369 1369
(./chapters/EIAH.tex 1370 1370 (./chapters/EIAH.tex
Chapitre 3. 1371 1371 Chapitre 3.
[23 1372 1372 [23
1373 1373
1374 1374
1375 1375
1376 1376
] 1377 1377 ]
Underfull \hbox (badness 10000) in paragraph at lines 24--25 1378 1378 Underfull \hbox (badness 10000) in paragraph at lines 24--25
[]\T1/phv/m/n/10.95 Les tech-niques d'IA peuvent aussi ai-der à prendre des dé- 1379 1379 []\T1/phv/m/n/10.95 Les tech-niques d'IA peuvent aussi ai-der à prendre des dé-
ci-sions stra-té- 1380 1380 ci-sions stra-té-
[] 1381 1381 []
1382 1382
1383 1383
Underfull \hbox (badness 1874) in paragraph at lines 24--25 1384 1384 Underfull \hbox (badness 1874) in paragraph at lines 24--25
\T1/phv/m/n/10.95 giques vi-sant des ob-jec-tifs à longue échéance comme le mon 1385 1385 \T1/phv/m/n/10.95 giques vi-sant des ob-jec-tifs à longue échéance comme le mon
tre le tra-vail de 1386 1386 tre le tra-vail de
[] 1387 1387 []
1388 1388
<./Figures/architecture.png, id=752, 776.9025pt x 454.69875pt> 1389 1389 <./Figures/architecture.png, id=752, 776.9025pt x 454.69875pt>
File: ./Figures/architecture.png Graphic file (type png) 1390 1390 File: ./Figures/architecture.png Graphic file (type png)
<use ./Figures/architecture.png> 1391 1391 <use ./Figures/architecture.png>
Package pdftex.def Info: ./Figures/architecture.png used on input line 38. 1392 1392 Package pdftex.def Info: ./Figures/architecture.png used on input line 38.
(pdftex.def) Requested size: 427.43153pt x 250.16833pt. 1393 1393 (pdftex.def) Requested size: 427.43153pt x 250.16833pt.
[24] 1394 1394 [24]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1395 1395 Underfull \vbox (badness 10000) has occurred while \output is active []
1396 1396
[25 <./Figures/architecture.png>] 1397 1397 [25 <./Figures/architecture.png>]
<./Figures/ELearningLevels.png, id=781, 602.25pt x 612.78937pt> 1398 1398 <./Figures/ELearningLevels.png, id=781, 602.25pt x 612.78937pt>
File: ./Figures/ELearningLevels.png Graphic file (type png) 1399 1399 File: ./Figures/ELearningLevels.png Graphic file (type png)
<use ./Figures/ELearningLevels.png> 1400 1400 <use ./Figures/ELearningLevels.png>
Package pdftex.def Info: ./Figures/ELearningLevels.png used on input line 62. 1401 1401 Package pdftex.def Info: ./Figures/ELearningLevels.png used on input line 62.
(pdftex.def) Requested size: 427.43153pt x 434.92455pt. 1402 1402 (pdftex.def) Requested size: 427.43153pt x 434.92455pt.
1403 1403
Underfull \hbox (badness 3690) in paragraph at lines 62--62 1404 1404 Underfull \hbox (badness 3690) in paragraph at lines 62--62
[]\T1/phv/m/sc/10.95 Figure 3.2 \T1/phv/m/n/10.95 ^^U |Tra-duc-tion des ni-veau 1405 1405 []\T1/phv/m/sc/10.95 Figure 3.2 \T1/phv/m/n/10.95 ^^U |Tra-duc-tion des ni-veau
x du sys-tème de re-com-man-da-tion dans 1406 1406 x du sys-tème de re-com-man-da-tion dans
[] 1407 1407 []
1408 1408
1409 1409
Underfull \vbox (badness 10000) has occurred while \output is active [] 1410 1410 Underfull \vbox (badness 10000) has occurred while \output is active []
1411 1411
[26] 1412 1412 [26]
Overfull \hbox (2.56369pt too wide) in paragraph at lines 82--82 1413 1413 Overfull \hbox (2.56369pt too wide) in paragraph at lines 82--82
[]|\T1/phv/m/n/9 [[]]| 1414 1414 []|\T1/phv/m/n/9 [[]]|
[] 1415 1415 []
1416 1416
1417 1417
Overfull \hbox (0.5975pt too wide) in paragraph at lines 77--93 1418 1418 Overfull \hbox (0.5975pt too wide) in paragraph at lines 77--93
[][] 1419 1419 [][]
[] 1420 1420 []
1421 1421
) [27 <./Figures/ELearningLevels.png>] [28] 1422 1422 ) [27 <./Figures/ELearningLevels.png>] [28]
\openout2 = `./chapters/CBR.aux'. 1423 1423 \openout2 = `./chapters/CBR.aux'.
1424 1424
(./chapters/CBR.tex 1425 1425 (./chapters/CBR.tex
Chapitre 4. 1426 1426 Chapitre 4.
[29 1427 1427 [29
1428 1428
1429 1429
1430 1430
1431 1431
] [30] 1432 1432 ] [30]
Underfull \hbox (badness 1048) in paragraph at lines 26--27 1433 1433 Underfull \hbox (badness 1048) in paragraph at lines 26--27
[]\T1/phv/m/n/10.95 [[]] uti-lisent éga-le-ment le RàPC pour sé-lec-tion-ner la 1434 1434 []\T1/phv/m/n/10.95 [[]] uti-lisent éga-le-ment le RàPC pour sé-lec-tion-ner la
1435 1435
[] 1436 1436 []
1437 1437
1438 1438
Overfull \hbox (24.44536pt too wide) has occurred while \output is active 1439 1439 Overfull \hbox (24.44536pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 4.3. TRAVAUX RÉCENTS SUR LA REPRÉSENTATION DES CAS ET LE CY 1440 1440 \T1/phv/m/sl/10.95 4.3. TRAVAUX RÉCENTS SUR LA REPRÉSENTATION DES CAS ET LE CY
CLE DU RÀPC \T1/phv/m/n/10.95 31 1441 1441 CLE DU RÀPC \T1/phv/m/n/10.95 31
[] 1442 1442 []
1443 1443
[31] 1444 1444 [31]
<./Figures/ModCBR2.png, id=854, 1145.27875pt x 545.03625pt> 1445 1445 <./Figures/ModCBR2.png, id=854, 1145.27875pt x 545.03625pt>
File: ./Figures/ModCBR2.png Graphic file (type png) 1446 1446 File: ./Figures/ModCBR2.png Graphic file (type png)
<use ./Figures/ModCBR2.png> 1447 1447 <use ./Figures/ModCBR2.png>
Package pdftex.def Info: ./Figures/ModCBR2.png used on input line 40. 1448 1448 Package pdftex.def Info: ./Figures/ModCBR2.png used on input line 40.
(pdftex.def) Requested size: 427.43153pt x 203.41505pt. 1449 1449 (pdftex.def) Requested size: 427.43153pt x 203.41505pt.
<./Figures/ModCBR1.png, id=859, 942.52126pt x 624.83438pt> 1450 1450 <./Figures/ModCBR1.png, id=859, 942.52126pt x 624.83438pt>
File: ./Figures/ModCBR1.png Graphic file (type png) 1451 1451 File: ./Figures/ModCBR1.png Graphic file (type png)
<use ./Figures/ModCBR1.png> 1452 1452 <use ./Figures/ModCBR1.png>
Package pdftex.def Info: ./Figures/ModCBR1.png used on input line 46. 1453 1453 Package pdftex.def Info: ./Figures/ModCBR1.png used on input line 46.
(pdftex.def) Requested size: 427.43153pt x 283.36574pt. 1454 1454 (pdftex.def) Requested size: 427.43153pt x 283.36574pt.
[32 <./Figures/ModCBR2.png>] [33 <./Figures/ModCBR1.png>] [34] 1455 1455 [32 <./Figures/ModCBR2.png>] [33 <./Figures/ModCBR1.png>] [34]
<./Figures/taxonomieEIAH.png, id=899, 984.67876pt x 614.295pt> 1456 1456 <./Figures/taxonomieEIAH.png, id=899, 984.67876pt x 614.295pt>
File: ./Figures/taxonomieEIAH.png Graphic file (type png) 1457 1457 File: ./Figures/taxonomieEIAH.png Graphic file (type png)
<use ./Figures/taxonomieEIAH.png> 1458 1458 <use ./Figures/taxonomieEIAH.png>
Package pdftex.def Info: ./Figures/taxonomieEIAH.png used on input line 82. 1459 1459 Package pdftex.def Info: ./Figures/taxonomieEIAH.png used on input line 82.
(pdftex.def) Requested size: 427.43153pt x 266.65376pt. 1460 1460 (pdftex.def) Requested size: 427.43153pt x 266.65376pt.
1461 1461
Underfull \hbox (badness 1895) in paragraph at lines 91--91 1462 1462 Underfull \hbox (badness 1895) in paragraph at lines 91--91
[][]\T1/phv/m/sc/14.4 Récapitulatif des li-mites des tra-vaux pré-sen-tés 1463 1463 [][]\T1/phv/m/sc/14.4 Récapitulatif des li-mites des tra-vaux pré-sen-tés
[] 1464 1464 []
1465 1465
1466 1466
Underfull \vbox (badness 10000) has occurred while \output is active [] 1467 1467 Underfull \vbox (badness 10000) has occurred while \output is active []
1468 1468
[35] 1469 1469 [35]
Overfull \hbox (2.19226pt too wide) in paragraph at lines 109--109 1470 1470 Overfull \hbox (2.19226pt too wide) in paragraph at lines 109--109
[]|\T1/phv/m/n/9 [[]]| 1471 1471 []|\T1/phv/m/n/9 [[]]|
[] 1472 1472 []
1473 1473
1474 1474
Overfull \hbox (8.65419pt too wide) in paragraph at lines 115--115 1475 1475 Overfull \hbox (8.65419pt too wide) in paragraph at lines 115--115
[]|\T1/phv/m/n/9 [[]]| 1476 1476 []|\T1/phv/m/n/9 [[]]|
[] 1477 1477 []
1478 1478
1479 1479
Overfull \hbox (1.23834pt too wide) in paragraph at lines 135--135 1480 1480 Overfull \hbox (1.23834pt too wide) in paragraph at lines 135--135
[]|\T1/phv/m/n/9 [[]]| 1481 1481 []|\T1/phv/m/n/9 [[]]|
[] 1482 1482 []
1483 1483
1484 1484
Overfull \hbox (7.38495pt too wide) in paragraph at lines 143--143 1485 1485 Overfull \hbox (7.38495pt too wide) in paragraph at lines 143--143
[]|\T1/phv/m/n/9 [[]]| 1486 1486 []|\T1/phv/m/n/9 [[]]|
[] 1487 1487 []
1488 1488
) [36 <./Figures/taxonomieEIAH.png>] 1489 1489 ) [36 <./Figures/taxonomieEIAH.png>]
Overfull \hbox (14.11055pt too wide) has occurred while \output is active 1490 1490 Overfull \hbox (14.11055pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 4.7. RÉCAPITULATIF DES LIMITES DES TRAVAUX PRÉSENTÉS DANS C 1491 1491 \T1/phv/m/sl/10.95 4.7. RÉCAPITULATIF DES LIMITES DES TRAVAUX PRÉSENTÉS DANS C
E CHAPITRE \T1/phv/m/n/10.95 37 1492 1492 E CHAPITRE \T1/phv/m/n/10.95 37
[] 1493 1493 []
1494 1494
[37] [38 1495 1495 [37] [38
1496 1496
1497 1497
1498 1498
] [39] [40] 1499 1499 ] [39] [40]
\openout2 = `./chapters/Architecture.aux'. 1500 1500 \openout2 = `./chapters/Architecture.aux'.
1501 1501
(./chapters/Architecture.tex 1502 1502 (./chapters/Architecture.tex
Chapitre 5. 1503 1503 Chapitre 5.
1504 1504
Underfull \vbox (badness 10000) has occurred while \output is active [] 1505 1505 Underfull \vbox (badness 10000) has occurred while \output is active []
1506 1506
[41 1507 1507 [41
1508 1508
1509 1509
] 1510 1510 ]
<./Figures/AIVT.png, id=977, 1116.17pt x 512.91624pt> 1511 1511 <./Figures/AIVT.png, id=977, 1116.17pt x 512.91624pt>
File: ./Figures/AIVT.png Graphic file (type png) 1512 1512 File: ./Figures/AIVT.png Graphic file (type png)
<use ./Figures/AIVT.png> 1513 1513 <use ./Figures/AIVT.png>
Package pdftex.def Info: ./Figures/AIVT.png used on input line 23. 1514 1514 Package pdftex.def Info: ./Figures/AIVT.png used on input line 23.
(pdftex.def) Requested size: 427.43153pt x 196.41287pt. 1515 1515 (pdftex.def) Requested size: 427.43153pt x 196.41287pt.
1516 1516
[42 <./Figures/AIVT.png>] 1517 1517 [42 <./Figures/AIVT.png>]
Underfull \hbox (badness 3049) in paragraph at lines 44--45 1518 1518 Underfull \hbox (badness 3049) in paragraph at lines 44--45
[]|\T1/phv/m/n/10.95 Discipline des in-for-ma-tions conte- 1519 1519 []|\T1/phv/m/n/10.95 Discipline des in-for-ma-tions conte-
[] 1520 1520 []
1521 1521
1522 1522
Underfull \hbox (badness 2435) in paragraph at lines 46--46 1523 1523 Underfull \hbox (badness 2435) in paragraph at lines 46--46
[]|\T1/phv/m/n/10.95 Le ni-veau sco-laire de la ma-tière 1524 1524 []|\T1/phv/m/n/10.95 Le ni-veau sco-laire de la ma-tière
[] 1525 1525 []
1526 1526
1527 1527
Underfull \hbox (badness 7468) in paragraph at lines 47--48 1528 1528 Underfull \hbox (badness 7468) in paragraph at lines 47--48
[]|\T1/phv/m/n/10.95 Professeur, Ad-mi-nis- 1529 1529 []|\T1/phv/m/n/10.95 Professeur, Ad-mi-nis-
[] 1530 1530 []
1531 1531
1532 1532
Underfull \hbox (badness 7468) in paragraph at lines 48--49 1533 1533 Underfull \hbox (badness 7468) in paragraph at lines 48--49
[]|\T1/phv/m/n/10.95 Professeur, Ad-mi-nis- 1534 1534 []|\T1/phv/m/n/10.95 Professeur, Ad-mi-nis-
[] 1535 1535 []
1536 1536
1537 1537
Underfull \hbox (badness 5050) in paragraph at lines 52--52 1538 1538 Underfull \hbox (badness 5050) in paragraph at lines 52--52
[]|\T1/phv/m/n/10.95 Le type d'in-for-ma-tions conte-nues 1539 1539 []|\T1/phv/m/n/10.95 Le type d'in-for-ma-tions conte-nues
[] 1540 1540 []
1541 1541
1542 1542
Underfull \hbox (badness 10000) in paragraph at lines 54--55 1543 1543 Underfull \hbox (badness 10000) in paragraph at lines 54--55
[]|\T1/phv/m/n/10.95 Connaissances et 1544 1544 []|\T1/phv/m/n/10.95 Connaissances et
[] 1545 1545 []
1546 1546
1547 1547
Overfull \hbox (1.98096pt too wide) in paragraph at lines 57--57 1548 1548 Overfull \hbox (1.98096pt too wide) in paragraph at lines 57--57
[]|\T1/phv/m/n/10.95 Représentation 1549 1549 []|\T1/phv/m/n/10.95 Représentation
[] 1550 1550 []
1551 1551
1552 1552
Overfull \hbox (1.98096pt too wide) in paragraph at lines 58--58 1553 1553 Overfull \hbox (1.98096pt too wide) in paragraph at lines 58--58
[]|\T1/phv/m/n/10.95 Représentation 1554 1554 []|\T1/phv/m/n/10.95 Représentation
[] 1555 1555 []
1556 1556
1557 1557
Underfull \hbox (badness 10000) in paragraph at lines 59--60 1558 1558 Underfull \hbox (badness 10000) in paragraph at lines 59--60
[]|\T1/phv/m/n/10.95 Représentation tex- 1559 1559 []|\T1/phv/m/n/10.95 Représentation tex-
[] 1560 1560 []
1561 1561
1562 1562
Underfull \hbox (badness 10000) in paragraph at lines 59--60 1563 1563 Underfull \hbox (badness 10000) in paragraph at lines 59--60
\T1/phv/m/n/10.95 tuel et gra-phique 1564 1564 \T1/phv/m/n/10.95 tuel et gra-phique
[] 1565 1565 []
1566 1566
1567 1567
Underfull \hbox (badness 2343) in paragraph at lines 63--64 1568 1568 Underfull \hbox (badness 2343) in paragraph at lines 63--64
[]|\T1/phv/m/n/10.95 Ordinateur ou ap-pa- 1569 1569 []|\T1/phv/m/n/10.95 Ordinateur ou ap-pa-
[] 1570 1570 []
1571 1571
1572 1572
Underfull \vbox (badness 10000) has occurred while \output is active [] 1573 1573 Underfull \vbox (badness 10000) has occurred while \output is active []
1574 1574
[43] 1575 1575 [43]
<./Figures/Architecture AI-VT2.png, id=993, 1029.8475pt x 948.54375pt> 1576 1576 <./Figures/Architecture AI-VT2.png, id=993, 1029.8475pt x 948.54375pt>
File: ./Figures/Architecture AI-VT2.png Graphic file (type png) 1577 1577 File: ./Figures/Architecture AI-VT2.png Graphic file (type png)
<use ./Figures/Architecture AI-VT2.png> 1578 1578 <use ./Figures/Architecture AI-VT2.png>
Package pdftex.def Info: ./Figures/Architecture AI-VT2.png used on input line 1579 1579 Package pdftex.def Info: ./Figures/Architecture AI-VT2.png used on input line
80. 1580 1580 80.
(pdftex.def) Requested size: 427.43153pt x 393.68173pt. 1581 1581 (pdftex.def) Requested size: 427.43153pt x 393.68173pt.
1582 1582
Underfull \vbox (badness 10000) has occurred while \output is active [] 1583 1583 Underfull \vbox (badness 10000) has occurred while \output is active []
1584 1584
[44] 1585 1585 [44]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1586 1586 Underfull \vbox (badness 10000) has occurred while \output is active []
1587 1587
[45 <./Figures/Architecture AI-VT2.png>] 1588 1588 [45 <./Figures/Architecture AI-VT2.png>]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1589 1589 Underfull \vbox (badness 10000) has occurred while \output is active []
1590 1590
[46] 1591 1591 [46]
[47] [48] 1592 1592 [47] [48]
<./Figures/Layers.png, id=1020, 392.46625pt x 216.81pt> 1593 1593 <./Figures/Layers.png, id=1020, 392.46625pt x 216.81pt>
File: ./Figures/Layers.png Graphic file (type png) 1594 1594 File: ./Figures/Layers.png Graphic file (type png)
<use ./Figures/Layers.png> 1595 1595 <use ./Figures/Layers.png>
Package pdftex.def Info: ./Figures/Layers.png used on input line 153. 1596 1596 Package pdftex.def Info: ./Figures/Layers.png used on input line 153.
(pdftex.def) Requested size: 313.9734pt x 173.44823pt. 1597 1597 (pdftex.def) Requested size: 313.9734pt x 173.44823pt.
<./Figures/flow.png, id=1022, 721.69624pt x 593.21625pt> 1598 1598 <./Figures/flow.png, id=1022, 721.69624pt x 593.21625pt>
File: ./Figures/flow.png Graphic file (type png) 1599 1599 File: ./Figures/flow.png Graphic file (type png)
<use ./Figures/flow.png> 1600 1600 <use ./Figures/flow.png>
Package pdftex.def Info: ./Figures/flow.png used on input line 164. 1601 1601 Package pdftex.def Info: ./Figures/flow.png used on input line 164.
(pdftex.def) Requested size: 427.43153pt x 351.33421pt. 1602 1602 (pdftex.def) Requested size: 427.43153pt x 351.33421pt.
) [49 <./Figures/Layers.png>] [50 <./Figures/flow.png>] 1603 1603 ) [49 <./Figures/Layers.png>] [50 <./Figures/flow.png>]
\openout2 = `./chapters/ESCBR.aux'. 1604 1604 \openout2 = `./chapters/ESCBR.aux'.
1605 1605
1606 1606
(./chapters/ESCBR.tex 1607 1607 (./chapters/ESCBR.tex
Chapitre 6. 1608 1608 Chapitre 6.
1609 1609
Underfull \hbox (badness 1552) in paragraph at lines 7--9 1610 1610 Underfull \hbox (badness 1552) in paragraph at lines 7--9
\T1/phv/m/n/10.95 multi-agents cog-ni-tifs im-plé-men-tant un rai-son-ne-ment b 1611 1611 \T1/phv/m/n/10.95 multi-agents cog-ni-tifs im-plé-men-tant un rai-son-ne-ment b
ayé-sien. Cette as-so-cia-tion, 1612 1612 ayé-sien. Cette as-so-cia-tion,
[] 1613 1613 []
1614 1614
1615 1615
Underfull \hbox (badness 10000) in paragraph at lines 7--9 1616 1616 Underfull \hbox (badness 10000) in paragraph at lines 7--9
1617 1617
[] 1618 1618 []
1619 1619
[51 1620 1620 [51
1621 1621
1622 1622
1623 1623
1624 1624
] 1625 1625 ]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1626 1626 Underfull \vbox (badness 10000) has occurred while \output is active []
1627 1627
[52] 1628 1628 [52]
<./Figures/NCBR0.png, id=1065, 623.32875pt x 459.7175pt> 1629 1629 <./Figures/NCBR0.png, id=1065, 623.32875pt x 459.7175pt>
File: ./Figures/NCBR0.png Graphic file (type png) 1630 1630 File: ./Figures/NCBR0.png Graphic file (type png)
<use ./Figures/NCBR0.png> 1631 1631 <use ./Figures/NCBR0.png>
Package pdftex.def Info: ./Figures/NCBR0.png used on input line 33. 1632 1632 Package pdftex.def Info: ./Figures/NCBR0.png used on input line 33.
(pdftex.def) Requested size: 427.43153pt x 315.24129pt. 1633 1633 (pdftex.def) Requested size: 427.43153pt x 315.24129pt.
1634 1634
[53 <./Figures/NCBR0.png>] 1635 1635 [53 <./Figures/NCBR0.png>]
<./Figures/FlowCBR0.png, id=1076, 370.38374pt x 661.47125pt> 1636 1636 <./Figures/FlowCBR0.png, id=1076, 370.38374pt x 661.47125pt>
File: ./Figures/FlowCBR0.png Graphic file (type png) 1637 1637 File: ./Figures/FlowCBR0.png Graphic file (type png)
<use ./Figures/FlowCBR0.png> 1638 1638 <use ./Figures/FlowCBR0.png>
Package pdftex.def Info: ./Figures/FlowCBR0.png used on input line 42. 1639 1639 Package pdftex.def Info: ./Figures/FlowCBR0.png used on input line 42.
(pdftex.def) Requested size: 222.23195pt x 396.8858pt. 1640 1640 (pdftex.def) Requested size: 222.23195pt x 396.8858pt.
[54 <./Figures/FlowCBR0.png>] 1641 1641 [54 <./Figures/FlowCBR0.png>]
<./Figures/Stacking1.png, id=1085, 743.77875pt x 414.54875pt> 1642 1642 <./Figures/Stacking1.png, id=1085, 743.77875pt x 414.54875pt>
File: ./Figures/Stacking1.png Graphic file (type png) 1643 1643 File: ./Figures/Stacking1.png Graphic file (type png)
<use ./Figures/Stacking1.png> 1644 1644 <use ./Figures/Stacking1.png>
Package pdftex.def Info: ./Figures/Stacking1.png used on input line 81. 1645 1645 Package pdftex.def Info: ./Figures/Stacking1.png used on input line 81.
(pdftex.def) Requested size: 427.43153pt x 238.23717pt. 1646 1646 (pdftex.def) Requested size: 427.43153pt x 238.23717pt.
[55] 1647 1647 [55]
<./Figures/SolRep.png, id=1096, 277.035pt x 84.315pt> 1648 1648 <./Figures/SolRep.png, id=1096, 277.035pt x 84.315pt>
File: ./Figures/SolRep.png Graphic file (type png) 1649 1649 File: ./Figures/SolRep.png Graphic file (type png)
<use ./Figures/SolRep.png> 1650 1650 <use ./Figures/SolRep.png>
Package pdftex.def Info: ./Figures/SolRep.png used on input line 95. 1651 1651 Package pdftex.def Info: ./Figures/SolRep.png used on input line 95.
(pdftex.def) Requested size: 277.03432pt x 84.31477pt. 1652 1652 (pdftex.def) Requested size: 277.03432pt x 84.31477pt.
<./Figures/AutomaticS.png, id=1097, 688.5725pt x 548.0475pt> 1653 1653 <./Figures/AutomaticS.png, id=1097, 688.5725pt x 548.0475pt>
File: ./Figures/AutomaticS.png Graphic file (type png) 1654 1654 File: ./Figures/AutomaticS.png Graphic file (type png)
<use ./Figures/AutomaticS.png> 1655 1655 <use ./Figures/AutomaticS.png>
Package pdftex.def Info: ./Figures/AutomaticS.png used on input line 104. 1656 1656 Package pdftex.def Info: ./Figures/AutomaticS.png used on input line 104.
(pdftex.def) Requested size: 427.43153pt x 340.20406pt. 1657 1657 (pdftex.def) Requested size: 427.43153pt x 340.20406pt.
1658 1658
Underfull \vbox (badness 10000) has occurred while \output is active [] 1659 1659 Underfull \vbox (badness 10000) has occurred while \output is active []
1660 1660
[56 <./Figures/Stacking1.png> <./Figures/SolRep.png>] [57 <./Figures/Automatic 1661 1661 [56 <./Figures/Stacking1.png> <./Figures/SolRep.png>] [57 <./Figures/Automatic
S.png>] 1662 1662 S.png>]
[58] 1663 1663 [58]
<./Figures/Stacking2.png, id=1134, 743.77875pt x 414.54875pt> 1664 1664 <./Figures/Stacking2.png, id=1134, 743.77875pt x 414.54875pt>
File: ./Figures/Stacking2.png Graphic file (type png) 1665 1665 File: ./Figures/Stacking2.png Graphic file (type png)
<use ./Figures/Stacking2.png> 1666 1666 <use ./Figures/Stacking2.png>
Package pdftex.def Info: ./Figures/Stacking2.png used on input line 191. 1667 1667 Package pdftex.def Info: ./Figures/Stacking2.png used on input line 191.
(pdftex.def) Requested size: 427.43153pt x 238.23717pt. 1668 1668 (pdftex.def) Requested size: 427.43153pt x 238.23717pt.
1669 1669
Underfull \hbox (badness 10000) in paragraph at lines 202--203 1670 1670 Underfull \hbox (badness 10000) in paragraph at lines 202--203
1671 1671
[] 1672 1672 []
1673 1673
[59 <./Figures/Stacking2.png>] 1674 1674 [59 <./Figures/Stacking2.png>]
<Figures/FW.png, id=1150, 456.70625pt x 342.27875pt> 1675 1675 <Figures/FW.png, id=1150, 456.70625pt x 342.27875pt>
File: Figures/FW.png Graphic file (type png) 1676 1676 File: Figures/FW.png Graphic file (type png)
<use Figures/FW.png> 1677 1677 <use Figures/FW.png>
Package pdftex.def Info: Figures/FW.png used on input line 216. 1678 1678 Package pdftex.def Info: Figures/FW.png used on input line 216.
(pdftex.def) Requested size: 427.43153pt x 320.34758pt. 1679 1679 (pdftex.def) Requested size: 427.43153pt x 320.34758pt.
[60 <./Figures/FW.png>] [61] 1680 1680 [60 <./Figures/FW.png>] [61]
<./Figures/boxplot.png, id=1171, 1994.45125pt x 959.585pt> 1681 1681 <./Figures/boxplot.png, id=1171, 1994.45125pt x 959.585pt>
File: ./Figures/boxplot.png Graphic file (type png) 1682 1682 File: ./Figures/boxplot.png Graphic file (type png)
<use ./Figures/boxplot.png> 1683 1683 <use ./Figures/boxplot.png>
Package pdftex.def Info: ./Figures/boxplot.png used on input line 321. 1684 1684 Package pdftex.def Info: ./Figures/boxplot.png used on input line 321.
(pdftex.def) Requested size: 427.43153pt x 205.64786pt. 1685 1685 (pdftex.def) Requested size: 427.43153pt x 205.64786pt.
[62] 1686 1686 [62]
Underfull \hbox (badness 10000) in paragraph at lines 340--341 1687 1687 Underfull \hbox (badness 10000) in paragraph at lines 340--341
1688 1688
[] 1689 1689 []
1690 1690
1691 1691
Underfull \hbox (badness 2564) in paragraph at lines 342--342 1692 1692 Underfull \hbox (badness 2564) in paragraph at lines 342--342
[][]\T1/phv/m/sc/14.4 ESCBR-SMA : In-tro-duc-tion des sys-tèmes multi- 1693 1693 [][]\T1/phv/m/sc/14.4 ESCBR-SMA : In-tro-duc-tion des sys-tèmes multi-
[] 1694 1694 []
1695 1695
1696 1696
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1697 1697 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1698 1698 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 63 1699 1699 S ESCBR \T1/phv/m/n/10.95 63
[] 1700 1700 []
1701 1701
[63 <./Figures/boxplot.png>] 1702 1702 [63 <./Figures/boxplot.png>]
<Figures/NCBR.png, id=1182, 653.44125pt x 445.665pt> 1703 1703 <Figures/NCBR.png, id=1182, 653.44125pt x 445.665pt>
File: Figures/NCBR.png Graphic file (type png) 1704 1704 File: Figures/NCBR.png Graphic file (type png)
<use Figures/NCBR.png> 1705 1705 <use Figures/NCBR.png>
Package pdftex.def Info: Figures/NCBR.png used on input line 352. 1706 1706 Package pdftex.def Info: Figures/NCBR.png used on input line 352.
(pdftex.def) Requested size: 427.43153pt x 291.5149pt. 1707 1707 (pdftex.def) Requested size: 427.43153pt x 291.5149pt.
[64 <./Figures/NCBR.png>] 1708 1708 [64 <./Figures/NCBR.png>]
<Figures/FlowCBR.png, id=1192, 450.68375pt x 822.07124pt> 1709 1709 <Figures/FlowCBR.png, id=1192, 450.68375pt x 822.07124pt>
File: Figures/FlowCBR.png Graphic file (type png) 1710 1710 File: Figures/FlowCBR.png Graphic file (type png)
<use Figures/FlowCBR.png> 1711 1711 <use Figures/FlowCBR.png>
Package pdftex.def Info: Figures/FlowCBR.png used on input line 381. 1712 1712 Package pdftex.def Info: Figures/FlowCBR.png used on input line 381.
(pdftex.def) Requested size: 270.41232pt x 493.24655pt. 1713 1713 (pdftex.def) Requested size: 270.41232pt x 493.24655pt.
1714 1714
Underfull \hbox (badness 1107) in paragraph at lines 414--415 1715 1715 Underfull \hbox (badness 1107) in paragraph at lines 414--415
[]\T1/phv/m/n/10.95 Cette sec-tion pré-sente de ma-nière plus dé-taillée les co 1716 1716 []\T1/phv/m/n/10.95 Cette sec-tion pré-sente de ma-nière plus dé-taillée les co
m-por-te-ments des agents 1717 1717 m-por-te-ments des agents
[] 1718 1718 []
1719 1719
1720 1720
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1721 1721 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1722 1722 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 65 1723 1723 S ESCBR \T1/phv/m/n/10.95 65
[] 1724 1724 []
1725 1725
[65] 1726 1726 [65]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1727 1727 Underfull \vbox (badness 10000) has occurred while \output is active []
1728 1728
[66 <./Figures/FlowCBR.png>] 1729 1729 [66 <./Figures/FlowCBR.png>]
<Figures/agent.png, id=1208, 352.31625pt x 402.50375pt> 1730 1730 <Figures/agent.png, id=1208, 352.31625pt x 402.50375pt>
File: Figures/agent.png Graphic file (type png) 1731 1731 File: Figures/agent.png Graphic file (type png)
<use Figures/agent.png> 1732 1732 <use Figures/agent.png>
Package pdftex.def Info: Figures/agent.png used on input line 455. 1733 1733 Package pdftex.def Info: Figures/agent.png used on input line 455.
(pdftex.def) Requested size: 246.61969pt x 281.7507pt. 1734 1734 (pdftex.def) Requested size: 246.61969pt x 281.7507pt.
1735 1735
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1736 1736 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1737 1737 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 67 1738 1738 S ESCBR \T1/phv/m/n/10.95 67
[] 1739 1739 []
1740 1740
[67] 1741 1741 [67]
<Figures/BayesianEvolution.png, id=1222, 626.34pt x 402.50375pt> 1742 1742 <Figures/BayesianEvolution.png, id=1222, 626.34pt x 402.50375pt>
File: Figures/BayesianEvolution.png Graphic file (type png) 1743 1743 File: Figures/BayesianEvolution.png Graphic file (type png)
<use Figures/BayesianEvolution.png> 1744 1744 <use Figures/BayesianEvolution.png>
Package pdftex.def Info: Figures/BayesianEvolution.png used on input line 468. 1745 1745 Package pdftex.def Info: Figures/BayesianEvolution.png used on input line 468.
1746 1746
(pdftex.def) Requested size: 313.16922pt x 201.25137pt. 1747 1747 (pdftex.def) Requested size: 313.16922pt x 201.25137pt.
[68 <./Figures/agent.png>] 1748 1748 [68 <./Figures/agent.png>]
Underfull \hbox (badness 10000) in paragraph at lines 509--509 1749 1749 Underfull \hbox (badness 10000) in paragraph at lines 509--509
[]|\T1/phv/m/n/8 Input. 1750 1750 []|\T1/phv/m/n/8 Input.
[] 1751 1751 []
1752 1752
1753 1753
Underfull \hbox (badness 10000) in paragraph at lines 509--510 1754 1754 Underfull \hbox (badness 10000) in paragraph at lines 509--510
[]|\T1/phv/m/n/8 Output 1755 1755 []|\T1/phv/m/n/8 Output
[] 1756 1756 []
1757 1757
<Figures/boxplot2.png, id=1237, 1615.03375pt x 835.12pt> 1758 1758 <Figures/boxplot2.png, id=1237, 1615.03375pt x 835.12pt>
File: Figures/boxplot2.png Graphic file (type png) 1759 1759 File: Figures/boxplot2.png Graphic file (type png)
<use Figures/boxplot2.png> 1760 1760 <use Figures/boxplot2.png>
Package pdftex.def Info: Figures/boxplot2.png used on input line 619. 1761 1761 Package pdftex.def Info: Figures/boxplot2.png used on input line 619.
(pdftex.def) Requested size: 427.43153pt x 221.01265pt. 1762 1762 (pdftex.def) Requested size: 427.43153pt x 221.01265pt.
1763 1763
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1764 1764 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1765 1765 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 69 1766 1766 S ESCBR \T1/phv/m/n/10.95 69
[] 1767 1767 []
1768 1768
[69 <./Figures/BayesianEvolution.png>] 1769 1769 [69 <./Figures/BayesianEvolution.png>]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1770 1770 Underfull \vbox (badness 10000) has occurred while \output is active []
1771 1771
[70] 1772 1772 [70]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1773 1773 Underfull \vbox (badness 10000) has occurred while \output is active []
1774 1774
1775 1775
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1776 1776 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1777 1777 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 71 1778 1778 S ESCBR \T1/phv/m/n/10.95 71
[] 1779 1779 []
1780 1780
[71 <./Figures/boxplot2.png>]) [72] 1781 1781 [71 <./Figures/boxplot2.png>]) [72]
\openout2 = `./chapters/TS.aux'. 1782 1782 \openout2 = `./chapters/TS.aux'.
1783 1783
(./chapters/TS.tex 1784 1784 (./chapters/TS.tex
Chapitre 7. 1785 1785 Chapitre 7.
1786 1786
Underfull \vbox (badness 10000) has occurred while \output is active [] 1787 1787 Underfull \vbox (badness 10000) has occurred while \output is active []
1788 1788
[73 1789 1789 [73
1790 1790
1791 1791
1792 1792
1793 1793
] 1794 1794 ]
Overfull \hbox (19.02232pt too wide) in paragraph at lines 33--59 1795 1795 Overfull \hbox (19.02232pt too wide) in paragraph at lines 33--59
[][] 1796 1796 [][]
[] 1797 1797 []