Commit 2a133b0ce0cea4bbc0db077c67eb74dc771ca52b

Authored by dsotofor
1 parent 59925d2ecb
Exists in main

version all ok, corrections to do in general conclusion...

Showing 9 changed files with 68 additions and 50 deletions

chapters/TS.aux
\relax
\providecommand\hyper@newdestlabel[2]{}
\citation{Liu2023}
\citation{MUANGPRATHUB2020e05227}
\citation{9870279}
\citation{Soto2}
\@writefile{toc}{\contentsline {chapter}{\numberline {7}Système de Recommandation dans AI-VT}{73}{chapter.7}\protected@file@percent }
\@writefile{lof}{\addvspace {10\p@ }}
\@writefile{lot}{\addvspace {10\p@ }}
\@writefile{toc}{\contentsline {section}{\numberline {7.1}Introduction}{73}{section.7.1}\protected@file@percent }
\@writefile{toc}{\contentsline {section}{\numberline {7.2}Système de recommandation stochastique fondé sur l'échantillonnage de Thompson}{74}{section.7.2}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {7.2.1}Algorithme Proposé}{74}{subsection.7.2.1}\protected@file@percent }
\newlabel{eqBeta}{{7.1}{74}{Algorithme Proposé}{equation.7.2.1}{}}
\citation{Arthurs}
\@writefile{lot}{\contentsline {table}{\numberline {7.1}{\ignorespaces Variables et paramètres du système de recommandation proposé\relax }}{75}{table.caption.45}\protected@file@percent }
\newlabel{tabPar}{{7.1}{75}{Variables et paramètres du système de recommandation proposé\relax }{table.caption.45}{}}
\newlabel{eqsGT}{{7.2}{75}{Algorithme Proposé}{equation.7.2.2}{}}
\newlabel{eqgtc}{{7.3}{75}{Algorithme Proposé}{equation.7.2.3}{}}
\newlabel{eqltc}{{7.4}{75}{Algorithme Proposé}{equation.7.2.4}{}}
\@writefile{loa}{\contentsline {algorithm}{\numberline {1}{\ignorespaces Algorithme de recommandation stochastique\relax }}{76}{algorithm.1}\protected@file@percent }
\newlabel{alg2}{{1}{76}{Algorithme de recommandation stochastique\relax }{algorithm.1}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {7.2.2}Résultats}{76}{subsection.7.2.2}\protected@file@percent }
\@writefile{lot}{\contentsline {table}{\numberline {7.2}{\ignorespaces Description des données utilisées pour l'évaluation.\relax }}{76}{table.caption.47}\protected@file@percent }
\newlabel{tabDataSet}{{7.2}{76}{Description des données utilisées pour l'évaluation.\relax }{table.caption.47}{}}
\@writefile{lot}{\contentsline {table}{\numberline {7.3}{\ignorespaces Valeurs des paramètres pour les scénarios évalués\relax }}{76}{table.caption.48}\protected@file@percent }
\newlabel{tabgm1}{{7.3}{76}{Valeurs des paramètres pour les scénarios évalués\relax }{table.caption.48}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {7.1}{\ignorespaces Répartition des notes générées selon le niveau de complexité.\relax }}{77}{figure.caption.46}\protected@file@percent }
\newlabel{figData}{{7.1}{77}{Répartition des notes générées selon le niveau de complexité.\relax }{figure.caption.46}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {7.2}{\ignorespaces Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la première séance avec un démarrage à froid (sans données initiales sur les apprenants). Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)\relax }}{78}{figure.caption.49}\protected@file@percent }
\newlabel{figCmp2}{{7.2}{78}{Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la première séance avec un démarrage à froid (sans données initiales sur les apprenants). Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)\relax }{figure.caption.49}{}}
\newlabel{eqMetric1}{{7.5}{78}{Résultats}{equation.7.2.5}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {7.3}{\ignorespaces Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la deuxième séance. Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)\relax }}{79}{figure.caption.50}\protected@file@percent }
\newlabel{figCmp3}{{7.3}{79}{Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la deuxième séance. Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)\relax }{figure.caption.50}{}}
\newlabel{eqMetric2}{{7.6}{79}{Résultats}{equation.7.2.6}{}}
\newlabel{eqXc}{{7.7}{79}{Résultats}{equation.7.2.7}{}}
\newlabel{eqYc}{{7.8}{79}{Résultats}{equation.7.2.8}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {7.4}{\ignorespaces Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la troisième séance. Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)\relax }}{80}{figure.caption.51}\protected@file@percent }
\newlabel{figCmp4}{{7.4}{80}{Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la troisième séance. Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)\relax }{figure.caption.51}{}}
\@writefile{lot}{\contentsline {table}{\numberline {7.4}{\ignorespaces Résultats de la métrique $rp_c(x)$ (RàPC - Système sans module de recommandation, DM - Module de recommandation déterministe, SM - Module de recommandation stochastique)\relax }}{80}{table.caption.53}\protected@file@percent }
\newlabel{tabRM}{{7.4}{80}{Résultats de la métrique $rp_c(x)$ (RàPC - Système sans module de recommandation, DM - Module de recommandation déterministe, SM - Module de recommandation stochastique)\relax }{table.caption.53}{}}
\newlabel{eqMetricS1}{{7.9}{80}{Résultats}{equation.7.2.9}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {7.5}{\ignorespaces Fonction d'évaluation de la qualité de la recommandation pour un parcours standard\relax }}{81}{figure.caption.52}\protected@file@percent }
\newlabel{figMetric}{{7.5}{81}{Fonction d'évaluation de la qualité de la recommandation pour un parcours standard\relax }{figure.caption.52}{}}
\newlabel{eqMetricS2}{{7.10}{81}{Résultats}{equation.7.2.10}{}}
\newlabel{eqCS}{{7.11}{81}{Résultats}{equation.7.2.11}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {7.2.3}Discussion et Conclusion}{81}{subsection.7.2.3}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {7.6}{\ignorespaces Fonction d'évaluation de la qualité de la recommandation pour un apprentissage progressif.\relax }}{82}{figure.caption.54}\protected@file@percent }
\newlabel{figMetric2}{{7.6}{82}{Fonction d'évaluation de la qualité de la recommandation pour un apprentissage progressif.\relax }{figure.caption.54}{}}
\@writefile{lot}{\contentsline {table}{\numberline {7.5}{\ignorespaces Évaluation des recommandations proposées selon $rs_c(x)$ par les différents systèmes de recommandation testés : RàPC - Système sans module de recommandation, DM - Algorithme deterministique, SM - Algorithme stochastique\relax }}{82}{table.caption.55}\protected@file@percent }
\newlabel{tabRM2}{{7.5}{82}{Évaluation des recommandations proposées selon $rs_c(x)$ par les différents systèmes de recommandation testés : RàPC - Système sans module de recommandation, DM - Algorithme deterministique, SM - Algorithme stochastique\relax }{table.caption.55}{}}
\@writefile{lot}{\contentsline {table}{\numberline {7.6}{\ignorespaces Moyenne de la diversité des propositions pour tous les apprenants. Une valeur plus faible représente une plus grande diversité. (RàPC - Système sans module de recommandation, DM - Module deterministe, SM - Module stochastique)\relax }}{82}{table.caption.56}\protected@file@percent }
\newlabel{tabCS}{{7.6}{82}{Moyenne de la diversité des propositions pour tous les apprenants. Une valeur plus faible représente une plus grande diversité. (RàPC - Système sans module de recommandation, DM - Module deterministe, SM - Module stochastique)\relax }{table.caption.56}{}}
\@writefile{toc}{\contentsline {section}{\numberline {7.3}ESCBR-SMA et échantillonnage de Thompson}{83}{section.7.3}\protected@file@percent }
\citation{jmse11050890}
\citation{ZHANG2018189}
\citation{NEURIPS2023_9d8cf124}
\citation{pmlr-v238-ou24a}
\citation{math12111758}
\citation{NGUYEN2024111566}
\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.1}Concepts Associés}{84}{subsection.7.3.1}\protected@file@percent }
\newlabel{eqbkt1}{{7.12}{84}{Concepts Associés}{equation.7.3.12}{}}
\newlabel{eqbkt2}{{7.13}{84}{Concepts Associés}{equation.7.3.13}{}}
\newlabel{eqbkt3}{{7.14}{84}{Concepts Associés}{equation.7.3.14}{}}
\citation{Li_2024}
\newlabel{fbeta}{{7.15}{85}{Concepts Associés}{equation.7.3.15}{}}
\newlabel{eqGamma1}{{7.16}{85}{Concepts Associés}{equation.7.3.16}{}}
\newlabel{f2beta}{{7.17}{85}{Concepts Associés}{equation.7.3.17}{}}
\newlabel{f3Beta}{{7.18}{85}{Concepts Associés}{equation.7.3.18}{}}
\newlabel{eqJac}{{7.19}{85}{Concepts Associés}{equation.7.3.19}{}}
\newlabel{f4Beta}{{7.20}{85}{Concepts Associés}{equation.7.3.20}{}}
\newlabel{f5Beta}{{7.21}{85}{Concepts Associés}{equation.7.3.21}{}}
\newlabel{f6Beta}{{7.22}{85}{Concepts Associés}{equation.7.3.22}{}}
\newlabel{f7Beta}{{7.23}{85}{Concepts Associés}{equation.7.3.23}{}}
\citation{Kim2024}
\citation{10.1145/3578337.3605122}
\citation{lei2024analysis}
\newlabel{dkl}{{7.24}{86}{Concepts Associés}{equation.7.3.24}{}}
\newlabel{djs}{{7.25}{86}{Concepts Associés}{equation.7.3.25}{}}
\newlabel{djs2}{{7.26}{86}{Concepts Associés}{equation.7.3.26}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.2}Algorithme Proposé}{86}{subsection.7.3.2}\protected@file@percent }
\newlabel{Sec:TS-ESCBR-SMA}{{7.3.2}{86}{Algorithme Proposé}{subsection.7.3.2}{}}
\citation{badier:hal-04092828}
\@writefile{lof}{\contentsline {figure}{\numberline {7.7}{\ignorespaces Schéma de l'architecture de l'algorithme proposé\relax }}{87}{figure.caption.57}\protected@file@percent }
\newlabel{fig:Amodel}{{7.7}{87}{Schéma de l'architecture de l'algorithme proposé\relax }{figure.caption.57}{}}
\newlabel{IntEq1_}{{7.27}{87}{Algorithme Proposé}{equation.7.3.27}{}}
\newlabel{IntEq2_}{{7.28}{87}{Algorithme Proposé}{equation.7.3.28}{}}
\newlabel{eqMixModels_}{{7.29}{87}{Algorithme Proposé}{equation.7.3.29}{}}
+\citation{Data}
\citation{doi:10.1137/23M1592420}
\@writefile{lot}{\contentsline {table}{\numberline {7.7}{\ignorespaces Paramètres (p), variables (v) et fonctions (f) de l'algorithme proposé et des métriques utilisées\relax }}{88}{table.caption.58}\protected@file@percent }
\newlabel{tabvp}{{7.7}{88}{Paramètres (p), variables (v) et fonctions (f) de l'algorithme proposé et des métriques utilisées\relax }{table.caption.58}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.3}Résultats et Discussion}{88}{subsection.7.3.3}\protected@file@percent }
\@writefile{toc}{\contentsline {subsubsection}{\numberline {7.3.3.1}Régression avec ESCBR-SMA pour l'aide à l'apprentissage humain}{88}{subsubsection.7.3.3.1}\protected@file@percent }
\@writefile{lot}{\contentsline {table}{\numberline {7.8}{\ignorespaces Description des scénarios\relax }}{89}{table.caption.59}\protected@file@percent }
\newlabel{tab:scenarios}{{7.8}{89}{Description des scénarios\relax }{table.caption.59}{}}
\@writefile{lot}{\contentsline {table}{\numberline {7.9}{\ignorespaces Liste des algorithmes évalués \relax }}{89}{table.caption.60}\protected@file@percent }
\newlabel{tabAlgs}{{7.9}{89}{Liste des algorithmes évalués \relax }{table.caption.60}{}}
\@writefile{toc}{\contentsline {subsubsection}{\numberline {7.3.3.2}Progression des connaissances}{89}{subsubsection.7.3.3.2}\protected@file@percent }
\citation{Kuzilek2017}
+\citation{Data}
\@writefile{lot}{\contentsline {table}{\numberline {7.10}{\ignorespaces Erreurs moyennes et médianes des interpolations des 10 algorithmes sélectionnés sur les 4 scénarios considérés et obtenues après 100 exécutions.\relax }}{90}{table.caption.61}\protected@file@percent }
\newlabel{tab:results}{{7.10}{90}{Erreurs moyennes et médianes des interpolations des 10 algorithmes sélectionnés sur les 4 scénarios considérés et obtenues après 100 exécutions.\relax }{table.caption.61}{}}
\newlabel{eqprog1}{{7.30}{90}{Progression des connaissances}{equation.7.3.30}{}}
\newlabel{eqprog2}{{7.31}{90}{Progression des connaissances}{equation.7.3.31}{}}
\newlabel{eqVarP}{{7.32}{90}{Progression des connaissances}{equation.7.3.32}{}}
\newlabel{eqTEK}{{7.33}{90}{Progression des connaissances}{equation.7.3.33}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {7.8}{\ignorespaces Progression des connaissances avec l'échantillonnage de Thompson selon la divergence de Jensen-Shannon\relax }}{91}{figure.caption.62}\protected@file@percent }
\newlabel{fig:evolution}{{7.8}{91}{Progression des connaissances avec l'échantillonnage de Thompson selon la divergence de Jensen-Shannon\relax }{figure.caption.62}{}}
\@writefile{toc}{\contentsline {subsubsection}{\numberline {7.3.3.3}Système de recommandation avec un jeu de données d'étudiants réels}{91}{subsubsection.7.3.3.3}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {7.9}{\ignorespaces Nombre de recommandations par niveau de complexité\relax }}{92}{figure.caption.63}\protected@file@percent }
\newlabel{fig:stabilityBP}{{7.9}{92}{Nombre de recommandations par niveau de complexité\relax }{figure.caption.63}{}}
\@writefile{toc}{\contentsline {subsubsection}{\numberline {7.3.3.4}Comparaison entre TS et BKT}{92}{subsubsection.7.3.3.4}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {7.10}{\ignorespaces Précision de la recommandation\relax }}{93}{figure.caption.64}\protected@file@percent }
\newlabel{fig:precision}{{7.10}{93}{Précision de la recommandation\relax }{figure.caption.64}{}}
\@writefile{toc}{\contentsline {subsubsection}{\numberline {7.3.3.5}Système de recommandation avec ESCBR-SMA}{93}{subsubsection.7.3.3.5}\protected@file@percent }
\@writefile{toc}{\contentsline {subsubsection}{\numberline {7.3.3.6}Progression des connaissances TS vs TS et ESCBR-SMA}{93}{subsubsection.7.3.3.6}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {7.11}{\ignorespaces Comparaison de l'évolution des notes entre les systèmes fondés sur TS et BKT.\relax }}{94}{figure.caption.65}\protected@file@percent }
\newlabel{fig:EvGrades}{{7.11}{94}{Comparaison de l'évolution des notes entre les systèmes fondés sur TS et BKT.\relax }{figure.caption.65}{}}
\newlabel{eqjs4}{{7.34}{94}{Progression des connaissances TS vs TS et ESCBR-SMA}{equation.7.3.34}{}}
\newlabel{eqjs5}{{7.35}{94}{Progression des connaissances TS vs TS et ESCBR-SMA}{equation.7.3.35}{}}
\@writefile{lof}{\contentsline {figure}{\numberline {7.12}{\ignorespaces Comparaison de l'évolution des niveaux entre les systèmes de recommandation fondés sur BKT et TS\relax }}{95}{figure.caption.66}\protected@file@percent }
\newlabel{fig:EvCL}{{7.12}{95}{Comparaison de l'évolution des niveaux entre les systèmes de recommandation fondés sur BKT et TS\relax }{figure.caption.66}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {7.3.4}Conclusion}{95}{subsection.7.3.4}\protected@file@percent }
\citation{10.1145/3578337.3605122}
\@writefile{lof}{\contentsline {figure}{\numberline {7.13}{\ignorespaces Différence normalisée entre la progression avec échantillonnage de Thompson seul et échantillonnage de Thompson aassocié à ESCBR-SMA pour 1000 apprenants\relax }}{96}{figure.caption.67}\protected@file@percent }
\newlabel{fig_cmp2}{{7.13}{96}{Différence normalisée entre la progression avec échantillonnage de Thompson seul et échantillonnage de Thompson aassocié à ESCBR-SMA pour 1000 apprenants\relax }{figure.caption.67}{}}
\@writefile{toc}{\contentsline {section}{\numberline {7.4}ESCBR-SMA, échantillonnage de Thompson et processus de Hawkes}{96}{section.7.4}\protected@file@percent }
\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.1}Algorithme Proposé}{96}{subsection.7.4.1}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {7.14}{\ignorespaces Organisation des modules TS, ESCBR-SMA et processus de Hawkes.\relax }}{97}{figure.caption.68}\protected@file@percent }
\newlabel{fig:Amodel}{{7.14}{97}{Organisation des modules TS, ESCBR-SMA et processus de Hawkes.\relax }{figure.caption.68}{}}
\newlabel{hp1}{{7.36}{97}{Algorithme Proposé}{equation.7.4.36}{}}
\newlabel{hp21}{{7.37}{97}{Algorithme Proposé}{equation.7.4.37}{}}
\citation{Kuzilek2017}
+\citation{Data}
\newlabel{hp22}{{7.38}{98}{Algorithme Proposé}{equation.7.4.38}{}}
\newlabel{hp30}{{7.39}{98}{Algorithme Proposé}{equation.7.4.39}{}}
\newlabel{hp31}{{7.40}{98}{Algorithme Proposé}{equation.7.4.40}{}}
\newlabel{hpfa}{{7.41}{98}{Algorithme Proposé}{equation.7.4.41}{}} 135 138 \newlabel{hpfa}{{7.41}{98}{Algorithme Proposé}{equation.7.4.41}{}}
\newlabel{hpfb}{{7.42}{98}{Algorithme Proposé}{equation.7.4.42}{}} 136 139 \newlabel{hpfb}{{7.42}{98}{Algorithme Proposé}{equation.7.4.42}{}}
\newlabel{eqBetaH}{{7.43}{98}{Algorithme Proposé}{equation.7.4.43}{}} 137 140 \newlabel{eqBetaH}{{7.43}{98}{Algorithme Proposé}{equation.7.4.43}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.2}Résultats et Discussion}{98}{subsection.7.4.2}\protected@file@percent } 138 141 \@writefile{toc}{\contentsline {subsection}{\numberline {7.4.2}Résultats et Discussion}{98}{subsection.7.4.2}\protected@file@percent }
\@writefile{toc}{\contentsline {subsubsection}{\numberline {7.4.2.1}Système de recommandation avec un jeu de données d'étudiants réels (TS avec Hawkes)}{98}{subsubsection.7.4.2.1}\protected@file@percent } 139 142 \@writefile{toc}{\contentsline {subsubsection}{\numberline {7.4.2.1}Système de recommandation avec un jeu de données d'étudiants réels (TS avec Hawkes)}{98}{subsubsection.7.4.2.1}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {7.15}{\ignorespaces Nombre de recommandations par niveau de complexité (processus d'apprentissage statique en haut, processus d'apprentissage dynamique avec processus de Hawkes en bas)\relax }}{99}{figure.caption.69}\protected@file@percent } 140 143 \@writefile{lof}{\contentsline {figure}{\numberline {7.15}{\ignorespaces Nombre de recommandations par niveau de complexité (processus d'apprentissage statique en haut, processus d'apprentissage dynamique avec processus de Hawkes en bas)\relax }}{99}{figure.caption.69}\protected@file@percent }
\newlabel{fig:stabilityBP}{{7.15}{99}{Nombre de recommandations par niveau de complexité (processus d'apprentissage statique en haut, processus d'apprentissage dynamique avec processus de Hawkes en bas)\relax }{figure.caption.69}{}} 141 144 \newlabel{fig:stabilityBP}{{7.15}{99}{Nombre de recommandations par niveau de complexité (processus d'apprentissage statique en haut, processus d'apprentissage dynamique avec processus de Hawkes en bas)\relax }{figure.caption.69}{}}
\@writefile{toc}{\contentsline {subsubsection}{\numberline {7.4.2.2}Mesures de performances}{100}{subsubsection.7.4.2.2}\protected@file@percent } 142 145 \@writefile{toc}{\contentsline {subsubsection}{\numberline {7.4.2.2}Mesures de performances}{100}{subsubsection.7.4.2.2}\protected@file@percent }
\newlabel{metric1}{{7.44}{100}{Mesures de performances}{equation.7.4.44}{}} 143 146 \newlabel{metric1}{{7.44}{100}{Mesures de performances}{equation.7.4.44}{}}
\@writefile{lot}{\contentsline {table}{\numberline {7.11}{\ignorespaces Comparaison entre ESCBR-TS et ESCBR-TS-Hawkes lors d'un démarrage à froid.\relax }}{100}{table.caption.70}\protected@file@percent } 144 147 \@writefile{lot}{\contentsline {table}{\numberline {7.11}{\ignorespaces Comparaison entre ESCBR-TS et ESCBR-TS-Hawkes lors d'un démarrage à froid.\relax }}{100}{table.caption.70}\protected@file@percent }
\newlabel{tab:my_label}{{7.11}{100}{Comparaison entre ESCBR-TS et ESCBR-TS-Hawkes lors d'un démarrage à froid.\relax }{table.caption.70}{}} 145 148 \newlabel{tab:my_label}{{7.11}{100}{Comparaison entre ESCBR-TS et ESCBR-TS-Hawkes lors d'un démarrage à froid.\relax }{table.caption.70}{}}
\@writefile{toc}{\contentsline {subsection}{\numberline {7.4.3}Conclusion}{100}{subsection.7.4.3}\protected@file@percent } 146 149 \@writefile{toc}{\contentsline {subsection}{\numberline {7.4.3}Conclusion}{100}{subsection.7.4.3}\protected@file@percent }
\@writefile{lof}{\contentsline {figure}{\numberline {7.16}{\ignorespaces Variance pour la distribution de probabilité bêta et tous les niveaux de complexité (en haut : processus d'apprentissage statique. En bas : processus d'apprentissage dynamique avec processus de Hawkes)\relax }}{101}{figure.caption.71}\protected@file@percent } 147 150 \@writefile{lof}{\contentsline {figure}{\numberline {7.16}{\ignorespaces Variance pour la distribution de probabilité bêta et tous les niveaux de complexité (en haut : processus d'apprentissage statique. En bas : processus d'apprentissage dynamique avec processus de Hawkes)\relax }}{101}{figure.caption.71}\protected@file@percent }
\newlabel{fig:vars}{{7.16}{101}{Variance pour la distribution de probabilité bêta et tous les niveaux de complexité (en haut : processus d'apprentissage statique. En bas : processus d'apprentissage dynamique avec processus de Hawkes)\relax }{figure.caption.71}{}} 148 151 \newlabel{fig:vars}{{7.16}{101}{Variance pour la distribution de probabilité bêta et tous les niveaux de complexité (en haut : processus d'apprentissage statique. En bas : processus d'apprentissage dynamique avec processus de Hawkes)\relax }{figure.caption.71}{}}
\@setckpt{./chapters/TS}{ 149 152 \@setckpt{./chapters/TS}{
\setcounter{page}{103} 150 153 \setcounter{page}{103}
\setcounter{equation}{44} 151 154 \setcounter{equation}{44}
\setcounter{enumi}{0} 152 155 \setcounter{enumi}{0}
\setcounter{enumii}{0} 153 156 \setcounter{enumii}{0}
\setcounter{enumiii}{0} 154 157 \setcounter{enumiii}{0}
\setcounter{enumiv}{0} 155 158 \setcounter{enumiv}{0}
\setcounter{footnote}{0} 156 159 \setcounter{footnote}{0}
\setcounter{mpfootnote}{0} 157 160 \setcounter{mpfootnote}{0}
\setcounter{part}{3} 158 161 \setcounter{part}{3}
\setcounter{chapter}{7} 159 162 \setcounter{chapter}{7}
\setcounter{section}{4} 160 163 \setcounter{section}{4}
\setcounter{subsection}{3} 161 164 \setcounter{subsection}{3}
\setcounter{subsubsection}{0} 162 165 \setcounter{subsubsection}{0}
\setcounter{paragraph}{0} 163 166 \setcounter{paragraph}{0}
\setcounter{subparagraph}{0} 164 167 \setcounter{subparagraph}{0}
\setcounter{figure}{16} 165 168 \setcounter{figure}{16}
\setcounter{table}{11} 166 169 \setcounter{table}{11}
\setcounter{caption@flags}{2} 167 170 \setcounter{caption@flags}{2}
\setcounter{continuedfloat}{0} 168 171 \setcounter{continuedfloat}{0}
\setcounter{subfigure}{0} 169 172 \setcounter{subfigure}{0}
\setcounter{subtable}{0} 170 173 \setcounter{subtable}{0}
\setcounter{parentequation}{0} 171 174 \setcounter{parentequation}{0}
\setcounter{thmt@dummyctr}{0} 172 175 \setcounter{thmt@dummyctr}{0}
\setcounter{vrcnt}{0} 173 176 \setcounter{vrcnt}{0}
\setcounter{upm@subfigure@count}{0} 174 177 \setcounter{upm@subfigure@count}{0}
\setcounter{upm@fmt@mtabular@columnnumber}{0} 175 178 \setcounter{upm@fmt@mtabular@columnnumber}{0}
\setcounter{upm@format@section@sectionlevel}{2} 176 179 \setcounter{upm@format@section@sectionlevel}{2}
\setcounter{upm@fmt@savedcounter}{0} 177 180 \setcounter{upm@fmt@savedcounter}{0}
\setcounter{@@upm@fmt@inlineenumeration}{0} 178 181 \setcounter{@@upm@fmt@inlineenumeration}{0}
\setcounter{@upm@fmt@enumdescription@cnt@}{0} 179 182 \setcounter{@upm@fmt@enumdescription@cnt@}{0}
\setcounter{upmdefinition}{0} 180 183 \setcounter{upmdefinition}{0}
\setcounter{section@level}{2} 181 184 \setcounter{section@level}{2}
\setcounter{Item}{0} 182 185 \setcounter{Item}{0}
\setcounter{Hfootnote}{0} 183 186 \setcounter{Hfootnote}{0}
\setcounter{bookmark@seq@number}{89} 184 187 \setcounter{bookmark@seq@number}{89}
\setcounter{DefaultLines}{2} 185 188 \setcounter{DefaultLines}{2}
\setcounter{DefaultDepth}{0} 186 189 \setcounter{DefaultDepth}{0}
\setcounter{L@lines}{3} 187 190 \setcounter{L@lines}{3}
\setcounter{L@depth}{0} 188 191 \setcounter{L@depth}{0}
\setcounter{float@type}{8} 189 192 \setcounter{float@type}{8}
\setcounter{algorithm}{1} 190 193 \setcounter{algorithm}{1}
\setcounter{ALG@line}{8} 191 194 \setcounter{ALG@line}{8}
\setcounter{ALG@rem}{8} 192 195 \setcounter{ALG@rem}{8}
\setcounter{ALG@nested}{0} 193 196 \setcounter{ALG@nested}{0}
\setcounter{ALG@Lnr}{2} 194 197 \setcounter{ALG@Lnr}{2}
\setcounter{ALG@blocknr}{10} 195 198 \setcounter{ALG@blocknr}{10}
\setcounter{ALG@storecount}{0} 196 199 \setcounter{ALG@storecount}{0}
\setcounter{ALG@tmpcounter}{0} 197 200 \setcounter{ALG@tmpcounter}{0}
} 198 201 }
chapters/TS.tex View file @ 2a133b0
\chapter{Système de Recommandation dans AI-VT} 1 1 \chapter{Système de Recommandation dans AI-VT}
2 2
\section{Introduction} 3 3 \section{Introduction}
4 4
25 25
L'un des principaux modules d'un EIAH est le système de recommandation, qui vise à identifier les faiblesses de l'apprenant et à réviser la séance d'entraînement initialement proposée par le système. Ce type de module permet donc au système de personnaliser les contenus et les exercices en fonction des besoins et des résultats de chacun des apprenants. Certains auteurs n'hésitent pas à considérer que l'efficacité d'un EIAH dans l'acquisition des connaissances et l'adaptation aux différents types d'apprentissage dépend de ce type de module fondé sur la recommandation \cite{Liu2023}. 26 26 L'un des principaux modules d'un EIAH est le système de recommandation, qui vise à identifier les faiblesses de l'apprenant et à réviser la séance d'entraînement initialement proposée par le système. Ce type de module permet donc au système de personnaliser les contenus et les exercices en fonction des besoins et des résultats de chacun des apprenants. Certains auteurs n'hésitent pas à considérer que l'efficacité d'un EIAH dans l'acquisition des connaissances et l'adaptation aux différents types d'apprentissage dépend de ce type de module fondé sur la recommandation \cite{Liu2023}.
27 27
Les systèmes de recommandation dans les environnements d'apprentissage prennent en compte les exigences, les besoins, le profil, les acquis, les compétences, les intérêts et l'évolution de l'apprenant pour adapter et recommander des ressources ou des exercices. Dans ces systèmes, l'adaptation peut être de deux types : l'adaptation de la présentation, qui montre aux apprenants des ressources d'étude en fonction de leurs faiblesses, et/ou l'adaptation du parcours, qui change la structure du cours en fonction du niveau et du style d'apprentissage de chaque apprenant \cite{MUANGPRATHUB2020e05227}. 28 28 Les systèmes de recommandation dans les environnements d'apprentissage prennent en compte les exigences, les besoins, le profil, les acquis, les compétences, les intérêts et l'évolution de l'apprenant pour adapter et recommander des ressources ou des exercices. Dans ces systèmes, l'adaptation peut être de deux types : l'adaptation de la présentation, qui montre aux apprenants des ressources d'étude en fonction de leurs faiblesses, et/ou l'adaptation du parcours, qui change la structure du cours en fonction du niveau et du style d'apprentissage de chaque apprenant \cite{MUANGPRATHUB2020e05227}.
29 29
Parmi les algorithmes les plus prometteurs pouvant aider à proposer des recommandations, nous avons identifié l'algorithme d'échantillonnage de Thompson (TS). Il s'agit d'un algorithme probabiliste appartenant à la catégorie des algorithmes d'apprentissage par renforcement. À l'instant $t$, TS choisit l'action $a_t$ d'un ensemble $A$ d'actions possibles, et obtient une récompense pour celle-ci. À $t+1$, une action $a_{t+1}$ est sélectionnée en tenant compte de la récompense précédente. L'objectif consiste à maximiser la récompense. Selon le principe bayésien, cette maximisation itérative est opérée en suivant une distribution de probabilité évoluant à chaque itération. Cette évolution peut être calculée selon la variante de Bernoulli 30 30 Parmi les algorithmes les plus prometteurs pouvant aider à proposer des recommandations, nous avons identifié l'algorithme d'échantillonnage de Thompson (TS). Il s'agit d'un algorithme probabiliste appartenant à la catégorie des algorithmes d'apprentissage par renforcement. À l'instant $t$, TS choisit l'action $a_t$ d'un ensemble $A$ d'actions possibles, et obtient une récompense pour celle-ci. À $t+1$, une action $a_{t+1}$ est sélectionnée en tenant compte de la récompense précédente. L'objectif consiste à maximiser la récompense. Selon le principe bayésien, cette maximisation itérative est opérée en suivant une distribution de probabilité évoluant à chaque itération. Cette évolution peut être calculée selon la variante de Bernoulli
où la récompense n'a que deux valeurs possibles, 0 ou 1 (échec ou succès), ou selon une distribution \textit{Beta} définie sur l'intervalle $[0, 1]$ et paramétrée par deux valeurs $\alpha$ et $\beta$ \cite{9870279}. 32 32 où la récompense n'a que deux valeurs possibles, 0 ou 1 (échec ou succès), ou selon une distribution \textit{Beta} définie sur l'intervalle $[0, 1]$ et paramétrée par deux valeurs $\alpha$ et $\beta$ \cite{9870279}.
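Pour fixer les idées, le principe général de TS dans sa variante de Bernoulli peut être esquissé comme suit ; il s'agit d'une esquisse purement illustrative, indépendante d'AI-VT, où les probabilités de succès et l'a priori $Beta(1, 1)$ sont des hypothèses choisies pour l'exemple.

```python
import random

# Esquisse illustrative de l'échantillonnage de Thompson (variante de Bernoulli).
# Les probabilités de succès `p_reelles` sont des hypothèses d'illustration.
def thompson_bernoulli(p_reelles, n_tours, seed=42):
    rng = random.Random(seed)
    k = len(p_reelles)
    alpha = [1.0] * k  # a priori uniforme Beta(1, 1) pour chaque action
    beta = [1.0] * k
    for _ in range(n_tours):
        # tirer theta_a ~ Beta(alpha_a, beta_a) pour chaque action, jouer le maximum
        tirages = [rng.betavariate(alpha[a], beta[a]) for a in range(k)]
        a = tirages.index(max(tirages))
        recompense = 1 if rng.random() < p_reelles[a] else 0  # récompense binaire
        alpha[a] += recompense       # succès : alpha augmente
        beta[a] += 1 - recompense    # échec : beta augmente
    return alpha, beta

alpha, beta = thompson_bernoulli([0.2, 0.5, 0.8], 500)
```

Au fil des tours, les tirages se concentrent sur l'action dont la récompense moyenne observée est la plus élevée, tout en continuant d'explorer occasionnellement les autres actions.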
33 33
Ce chapitre est divisé en trois parties. La première présente un algorithme délivrant des recommandations en fonction des résultats produits par l'apprenant en temps réel. Une partie de cette proposition est publiée dans \cite{Soto2}. Cet algorithme permet l'adaptation automatique en temps réel d'une séance prédéterminée dans l'EIAH AI-VT. Nous considérons que cette adaptation intervient durant la phase de révision du cycle classique du raisonnement à partir de cas (RàPC). L'algorithme proposé est stochastique et il a été testé selon trois scénarios différents. Les résultats montrent de quelle manière AI-VT peut proposer des recommandations pertinentes selon les faiblesses identifiées de l'apprenant. 34 34 Ce chapitre est divisé en trois parties. La première présente un algorithme délivrant des recommandations en fonction des résultats produits par l'apprenant en temps réel. Une partie de cette proposition est publiée dans \cite{Soto2}. Cet algorithme permet l'adaptation automatique en temps réel d'une séance prédéterminée dans l'EIAH AI-VT. Nous considérons que cette adaptation intervient durant la phase de révision du cycle classique du raisonnement à partir de cas (RàPC). L'algorithme proposé est stochastique et il a été testé selon trois scénarios différents. Les résultats montrent de quelle manière AI-VT peut proposer des recommandations pertinentes selon les faiblesses identifiées de l'apprenant.
35 35
La deuxième partie de ce chapitre montre l'intégration à AI-VT de tous les algorithmes présentés dans les chapitres précédents. L'algorithme intégré est appliqué au système AI-VT sur des données générées et des données réelles. Plusieurs types de tests sont exécutés pour montrer que l'algorithme final permet en effet d'améliorer les capacités d'identification et d'adaptation. Les performances de ce nouvel algorithme sont comparées à celles d'autres algorithmes. Enfin, l'évolution de l'acquisition des connaissances induite par ces nouveaux algorithmes de recommandation stochastiques est analysée dans cette deuxième partie du présent chapitre. 36 36 La deuxième partie de ce chapitre montre l'intégration à AI-VT de tous les algorithmes présentés dans les chapitres précédents. L'algorithme intégré est appliqué au système AI-VT sur des données générées et des données réelles. Plusieurs types de tests sont exécutés pour montrer que l'algorithme final permet en effet d'améliorer les capacités d'identification et d'adaptation. Les performances de ce nouvel algorithme sont comparées à celles d'autres algorithmes. Enfin, l'évolution de l'acquisition des connaissances induite par ces nouveaux algorithmes de recommandation stochastiques est analysée dans cette deuxième partie du présent chapitre.
37 37
Pour terminer, dans la troisième partie de ce chapitre, nous présentons une évolution de ce système de recommandation intégrant le processus de Hawkes. L'intérêt de ce dernier réside dans le fait qu'il utilise une courbe d'oubli, nous permettant ainsi de tenir compte du fait que certaines connaissances et certains mécanismes doivent être rappelés aux apprenants. Cette troisième partie intègre une étude des performances du système de recommandation incluant ce processus stochastique de Hawkes. 38 38 Pour terminer, dans la troisième partie de ce chapitre, nous présentons une évolution de ce système de recommandation intégrant le processus de Hawkes. L'intérêt de ce dernier réside dans le fait qu'il utilise une courbe d'oubli, nous permettant ainsi de tenir compte du fait que certaines connaissances et certains mécanismes doivent être rappelés aux apprenants. Cette troisième partie intègre une étude des performances du système de recommandation incluant ce processus stochastique de Hawkes.
39 39
\section{Système de recommandation stochastique fondé sur l'échantillonnage de Thompson} 40 40 \section{Système de recommandation stochastique fondé sur l'échantillonnage de Thompson}
\sectionmark{Système de recommandation fondé sur TS} 41 41 \sectionmark{Système de recommandation fondé sur TS}
42 42
\subsection{Algorithme Proposé} 43 43 \subsection{Algorithme Proposé}
44 44
L'algorithme proposé, en tant que système de recommandation, prend en compte les notes antérieures des apprenants pour estimer leurs connaissances et leur maîtrise des différentes compétences, sous-compétences et niveaux de complexité au sein du système AI-VT. Puis il adapte les séances pour maximiser l'acquisition des connaissances et la maîtrise des différents domaines contenus dans la même compétence définie. 45 45 L'algorithme proposé, en tant que système de recommandation, prend en compte les notes antérieures des apprenants pour estimer leurs connaissances et leur maîtrise des différentes compétences, sous-compétences et niveaux de complexité au sein du système AI-VT. Puis il adapte les séances pour maximiser l'acquisition des connaissances et la maîtrise des différents domaines contenus dans la même compétence définie.
46 46
La famille de distributions de probabilité Beta est utilisée pour définir dynamiquement le niveau de complexité (équation \ref{eqBeta}) à proposer à l'apprenant. Cet algorithme permet de recommander des niveaux de complexité non contigus et dans lesquels des lacunes ont été détectées. Les paramètres initiaux des distributions de probabilité peuvent forcer le système à recommander des niveaux de complexité contigus (juste inférieur ou supérieur). 47 47 La famille de distributions de probabilité Beta est utilisée pour définir dynamiquement le niveau de complexité (équation \ref{eqBeta}) à proposer à l'apprenant. Cet algorithme permet de recommander des niveaux de complexité non contigus et dans lesquels des lacunes ont été détectées. Les paramètres initiaux des distributions de probabilité peuvent forcer le système à recommander des niveaux de complexité contigus (juste inférieur ou supérieur).
48 48
\begin{equation} 49 49 \begin{equation}
B(x, \alpha, \beta) = 50 50 B(x, \alpha, \beta) =
\begin{cases} 51 51 \begin{cases}
\frac{x^{\alpha-1}(1-x)^{\beta - 1}}{\int_0^1 u^{\alpha - 1}(1-u)^{\beta - 1}du} & si \; x \in [0, 1] \\ 52 52 \frac{x^{\alpha-1}(1-x)^{\beta - 1}}{\int_0^1 u^{\alpha - 1}(1-u)^{\beta - 1}du} & si \; x \in [0, 1] \\
0&sinon 53 53 0&sinon
\end{cases} 54 54 \end{cases}
\label{eqBeta} 55 55 \label{eqBeta}
\end{equation} 56 56 \end{equation}
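À titre de vérification numérique de l'équation \ref{eqBeta} (esquisse illustrative, paramètres $\alpha = 2$ et $\beta = 5$ arbitraires), le dénominateur s'écrit $\Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha+\beta)$ et la densité peut être intégrée par la méthode du point milieu :

```python
import math

# Densité Beta(x; alpha, beta) de l'équation (7.1), nulle hors de [0, 1].
def densite_beta(x, a, b):
    if not 0.0 <= x <= 1.0:
        return 0.0
    norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)  # fonction bêta B(a, b)
    return x ** (a - 1) * (1 - x) ** (b - 1) / norm

# intégration par la méthode du point milieu sur [0, 1]
n = 100_000
pas = 1.0 / n
points = [(i + 0.5) * pas for i in range(n)]
integrale = sum(densite_beta(x, 2.0, 5.0) for x in points) * pas
esperance = sum(x * densite_beta(x, 2.0, 5.0) for x in points) * pas
# integrale vaut environ 1 ; esperance vaut environ alpha / (alpha + beta) = 2/7
```

On retrouve numériquement que la densité intègre à 1 et que son espérance vaut $\alpha/(\alpha+\beta)$, propriété utilisée plus loin pour la sélection du niveau de complexité.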
57 57
\begin{table}[!ht] 58 58 \begin{table}[!ht]
\centering 59 59 \centering
\begin{tabular}{ccc} 60 60 \begin{tabular}{ccc}
ID&Description&Domaine\\ 61 61 ID&Description&Domaine\\
\hline 62 62 \hline
$c_n$&Niveaux de complexité&$\mathbb{N} \; | \; c_n>0$\\ 63 63 $c_n$&Niveaux de complexité&$\mathbb{N} \; | \; c_n>0$\\
$g_m$&Valeur maximale dans l'échelle des notes& $\mathbb{N} \;|\; g_m>0$ \\ 64 64 $g_m$&Valeur maximale dans l'échelle des notes& $\mathbb{N} \;|\; g_m>0$ \\
$g_t$&Seuil de notation &$(0, g_m) \in \mathbb{R}$\\ 65 65 $g_t$&Seuil de notation &$(0, g_m) \in \mathbb{R}$\\
$s$&Nombre de parcours définis&$\mathbb{N} \; | \; s>0$\\ 66 66 $s$&Nombre de parcours définis&$\mathbb{N} \; | \; s>0$\\
$s_c$&Parcours courant fixe défini&$[1, s] \in \mathbb{N}$\\ 67 67 $s_c$&Parcours courant fixe défini&$[1, s] \in \mathbb{N}$\\
$\Delta s$&Pas pour les paramètres de la distribution bêta dans le parcours $s$ &$(0,1) \in \mathbb{R}$\\ 68 68 $\Delta s$&Pas pour les paramètres de la distribution bêta dans le parcours $s$ &$(0,1) \in \mathbb{R}$\\
$t_m$&Valeur maximale du temps de réponse&$\mathbb{R} \; | \; t_m>0$\\ 69 69 $t_m$&Valeur maximale du temps de réponse&$\mathbb{R} \; | \; t_m>0$\\
$g_{c}$&Note de l'apprenant à une question de complexité $c$&$[0, g_m] \in \mathbb{R}$\\ 70 70 $g_{c}$&Note de l'apprenant à une question de complexité $c$&$[0, g_m] \in \mathbb{R}$\\
$ng_c$&Grade de l'apprenant avec pénalisation du temps &$[0, g_m] \in \mathbb{R}$\\ 71 71 $ng_c$&Grade de l'apprenant avec pénalisation du temps &$[0, g_m] \in \mathbb{R}$\\
$t_{c}$&Le temps de réponse à une question de complexité $c$&$[0, t_m] \in \mathbb{R}$\\ 72 72 $t_{c}$&Le temps de réponse à une question de complexité $c$&$[0, t_m] \in \mathbb{R}$\\
$ncl$&Nouveau niveau de complexité calculé&$\mathbb{N}$\\ 73 73 $ncl$&Nouveau niveau de complexité calculé&$\mathbb{N}$\\
$\alpha_{c}$&Valeur de $\alpha$ dans la complexité $c$&$\mathbb{R} \; | \; \alpha_{c}>0$\\ 74 74 $\alpha_{c}$&Valeur de $\alpha$ dans la complexité $c$&$\mathbb{R} \; | \; \alpha_{c}>0$\\
$\beta_{c}$&Valeur de $\beta$ dans la complexité $c$&$\mathbb{R} \; | \; \beta_{c}>0$\\ 75 75 $\beta_{c}$&Valeur de $\beta$ dans la complexité $c$&$\mathbb{R} \; | \; \beta_{c}>0$\\
$\Delta \beta$&Pas initial du paramètre bêta&$\mathbb{N} \; | \; \Delta \beta >0$\\ 76 76 $\Delta \beta$&Pas initial du paramètre bêta&$\mathbb{N} \; | \; \Delta \beta >0$\\
$\lambda$&Poids de la pénalisation temporelle&$(0,1) \in \mathbb{R}$\\ 77 77 $\lambda$&Poids de la pénalisation temporelle&$(0,1) \in \mathbb{R}$\\
$G_c$&Ensemble de $d$ notes dans le niveau de complexité $c$&$\mathbb{R}^d \;, d\in \mathbb{N} \; | \; d>0$\\ 78 78 $G_c$&Ensemble de $d$ notes dans le niveau de complexité $c$&$\mathbb{R}^d \;, d\in \mathbb{N} \; | \; d>0$\\
$x_c$&Notes moyennes normalisées&$[0, 1] \in \mathbb{R}$\\ 79 79 $x_c$&Notes moyennes normalisées&$[0, 1] \in \mathbb{R}$\\
$n_c$&Nombre total de questions dans une séance&$\mathbb{N} \; | \; n_c>0$\\ 80 80 $n_c$&Nombre total de questions dans une séance&$\mathbb{N} \; | \; n_c>0$\\
$ny_c$&Nombre de questions dans le niveau de complexité $c$&$\mathbb{N} \; | \; 0<ny_c \le n_c$\\ 81 81 $ny_c$&Nombre de questions dans le niveau de complexité $c$&$\mathbb{N} \; | \; 0<ny_c \le n_c$\\
$y_c$&Proportion de questions dans le niveau de complexité $c$&$[0, 1] \in \mathbb{R}$\\ 82 82 $y_c$&Proportion de questions dans le niveau de complexité $c$&$[0, 1] \in \mathbb{R}$\\
$r$&Valeur totale de la métrique définie pour l'adaptabilité&$[0, c_n] \in \mathbb{R}$\\ 83 83 $r$&Valeur totale de la métrique définie pour l'adaptabilité&$[0, c_n] \in \mathbb{R}$\\
$sc$&Valeur totale de la métrique de similarité cosinus&$[-1, 1] \in \mathbb{R}$\\ 84 84 $sc$&Valeur totale de la métrique de similarité cosinus&$[-1, 1] \in \mathbb{R}$\\
\end{tabular} 85 85 \end{tabular}
\caption{Variables et paramètres du système de recommandation proposé} 86 86 \caption{Variables et paramètres du système de recommandation proposé}
\label{tabPar} 87 87 \label{tabPar}
\end{table} 88 88 \end{table}
89 89
Le tableau \ref{tabPar} présente les variables de l'algorithme de recommandation. Nous avons considéré que les notes $g_c$ obtenues au niveau de complexité $c$ doivent tenir compte du temps de réponse. C'est la raison pour laquelle nous avons défini le grade de l'apprenant $ng_c$ au niveau de complexité $c$. Ce grade, calculé selon l'équation \ref{eqsGT}, tient compte d'un poids de pénalisation temporelle $\lambda$. 90 90 Le tableau \ref{tabPar} présente les variables de l'algorithme de recommandation. Nous avons considéré que les notes $g_c$ obtenues au niveau de complexité $c$ doivent tenir compte du temps de réponse. C'est la raison pour laquelle nous avons défini le grade de l'apprenant $ng_c$ au niveau de complexité $c$. Ce grade, calculé selon l'équation \ref{eqsGT}, tient compte d'un poids de pénalisation temporelle $\lambda$.
91 91
\begin{equation} 92 92 \begin{equation}
ng_c=g_c- \left(g_c * \lambda * \frac{t_c}{t_m} \right) 93 93 ng_c=g_c- \left(g_c * \lambda * \frac{t_c}{t_m} \right)
\label{eqsGT} 94 94 \label{eqsGT}
\end{equation} 95 95 \end{equation}
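Une application numérique de l'équation \ref{eqsGT}, avec des valeurs purement illustratives, montre l'effet du poids de pénalisation temporelle $\lambda$ :

```python
# Valeurs d'illustration : note g_c = 8 (sur g_m = 10), lambda = 0.5,
# temps de réponse t_c = 30 s pour un temps maximal t_m = 60 s.
g_c, lam, t_c, t_m = 8.0, 0.5, 30.0, 60.0
ng_c = g_c - g_c * lam * (t_c / t_m)  # grade pénalisé selon l'équation eqsGT

# Cas limites : une réponse instantanée (t_c = 0) laisse la note intacte ;
# une réponse au temps maximal (t_c = t_m) réduit la note d'un facteur (1 - lambda).
```

Ici, une réponse à mi-temps ramène la note de 8 à 6.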
96 96
Dans cet algorithme, la variable de seuil de grade $g_t$ détermine la variabilité de la distribution de probabilité pour chaque niveau de complexité. Les niveaux de complexité des exercices proposés à l'apprenant sont calculés par récompense inverse selon les équations \ref{eqgtc} et \ref{eqltc}. Chaque niveau de complexité est associé à une distribution de probabilité Beta avec des valeurs initiales $\alpha$ et $\beta$ prédéfinies. 97 97 Dans cet algorithme, la variable de seuil de grade $g_t$ détermine la variabilité de la distribution de probabilité pour chaque niveau de complexité. Les niveaux de complexité des exercices proposés à l'apprenant sont calculés par récompense inverse selon les équations \ref{eqgtc} et \ref{eqltc}. Chaque niveau de complexité est associé à une distribution de probabilité Beta avec des valeurs initiales $\alpha$ et $\beta$ prédéfinies.
98 98
\begin{equation} 99 99 \begin{equation}
ng_c \ge g_t \rightarrow 100 100 ng_c \ge g_t \rightarrow
\begin{cases} 101 101 \begin{cases}
\beta_c=\beta_c+\Delta_s\\ 102 102 \beta_c=\beta_c+\Delta_s\\
\beta_{c-1}=\beta_{c-1} + \frac{\Delta_s}{2}\\ 103 103 \beta_{c-1}=\beta_{c-1} + \frac{\Delta_s}{2}\\
\alpha_{c+1}=\alpha_{c+1} + \frac{\Delta_s}{2} 104 104 \alpha_{c+1}=\alpha_{c+1} + \frac{\Delta_s}{2}
\end{cases} 105 105 \end{cases}
\label{eqgtc} 106 106 \label{eqgtc}
\end{equation} 107 107 \end{equation}
108 108
\begin{equation} 109 109 \begin{equation}
ng_c < g_t \rightarrow 110 110 ng_c < g_t \rightarrow
\begin{cases} 111 111 \begin{cases}
\alpha_c=\alpha_c+\Delta_s\\ 112 112 \alpha_c=\alpha_c+\Delta_s\\
\alpha_{c-1}=\alpha_{c-1} + \frac{\Delta_s}{2}\\ 113 113 \alpha_{c-1}=\alpha_{c-1} + \frac{\Delta_s}{2}\\
\beta_{c+1}=\beta_{c+1} + \frac{\Delta_s}{2} 114 114 \beta_{c+1}=\beta_{c+1} + \frac{\Delta_s}{2}
\end{cases} 115 115 \end{cases}
\label{eqltc} 116 116 \label{eqltc}
\end{equation} 117 117 \end{equation}
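Les équations \ref{eqgtc} et \ref{eqltc} se traduisent directement en code. L'esquisse ci-dessous est illustrative : le nom de la fonction est hypothétique, et le traitement des niveaux extrêmes (indices $c-1$ ou $c+1$ hors bornes, simplement ignorés ici) est un choix d'implémentation non précisé dans le texte.

```python
# Mise à jour par récompense inverse des paramètres (alpha_c, beta_c) :
# un grade ng_c au-dessus du seuil g_t renforce beta_c (niveau maîtrisé,
# donc moins recommandé) ; en dessous du seuil, il renforce alpha_c.
def maj_parametres(alpha, beta, c, ng_c, g_t, delta_s):
    def ajouter(params, i, v):
        if 0 <= i < len(params):  # ignorer les indices hors bornes
            params[i] += v
    if ng_c >= g_t:  # cas de l'équation (eqgtc)
        ajouter(beta, c, delta_s)
        ajouter(beta, c - 1, delta_s / 2)
        ajouter(alpha, c + 1, delta_s / 2)
    else:            # cas de l'équation (eqltc)
        ajouter(alpha, c, delta_s)
        ajouter(alpha, c - 1, delta_s / 2)
        ajouter(beta, c + 1, delta_s / 2)
    return alpha, beta

# cinq niveaux de complexité, paramètres initiaux Beta(1, 1), seuil g_t = 5
alpha, beta = maj_parametres([1.0] * 5, [1.0] * 5, c=2, ng_c=7.0, g_t=5.0, delta_s=0.4)
```

Un succès au niveau 2 augmente ainsi $\beta_2$ de $\Delta_s$ et propage $\Delta_s/2$ vers $\beta_1$ et $\alpha_3$, conformément à l'équation \ref{eqgtc}.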
118 118
Pour chaque niveau de complexité $c$, $Beta(\alpha_c, \beta_c)$ fournit une distribution de probabilité $\theta_c$ dont nous calculons l'espérance $\mathbb{E}[\theta_c]$. Le nouveau niveau de complexité $ncl$ correspond à l'espérance maximale obtenue. 119 119 Pour chaque niveau de complexité $c$, $Beta(\alpha_c, \beta_c)$ fournit une distribution de probabilité $\theta_c$ dont nous calculons l'espérance $\mathbb{E}[\theta_c]$. Le nouveau niveau de complexité $ncl$ correspond à l'espérance maximale obtenue.
120 120
Le détail des étapes d'exécution de l'algorithme proposé est donné dans l'algorithme \ref{alg2}. 121 121 Le détail des étapes d'exécution de l'algorithme proposé est donné dans l'algorithme \ref{alg2}.
122 122
\begin{algorithm} 123 123 \begin{algorithm}
\caption{Algorithme de recommandation stochastique} 124 124 \caption{Algorithme de recommandation stochastique}
\begin{algorithmic} 125 125 \begin{algorithmic}
\State Initialisation de la distribution de probabilité 126 126 \State Initialisation de la distribution de probabilité
\For {\textbf{each} question $q$} 127 127 \For {\textbf{each} question $q$}
\State Soit le niveau de complexité $i$ 128 128 \State Soit le niveau de complexité $i$
\State $ng_i = g_i - \left( g_i \cdot \lambda \cdot \frac{t_i}{t_m} \right)$ \Comment{eq \ref{eqsGT}}
\State Calcul des paramètres $\alpha_i$ et $\beta_i$ \Comment{eq \ref{eqgtc} et eq \ref{eqltc}}
\State Choisir $\theta_c$ selon la distribution de probabilité Beta \Comment{$\forall c, \theta_c = Beta(\alpha_c, \beta_c)$} 131 131 \State Choisir $\theta_c$ selon la distribution de probabilité Beta \Comment{$\forall c, \theta_c = Beta(\alpha_c, \beta_c)$}
\State $ncl = \mathrm{argmax}_c \, \mathbb{E}[\theta_c]$
\EndFor 133 133 \EndFor
\end{algorithmic} 134 134 \end{algorithmic}
\label{alg2} 135 135 \label{alg2}
\end{algorithm} 136 136 \end{algorithm}
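Les deux dernières étapes de l'algorithme ci-dessus (tirage selon la distribution Beta, puis choix du niveau d'espérance maximale) peuvent s'esquisser ainsi en Python. Esquisse sous hypothèses : `random.betavariate` tient lieu du tirage $\theta_c \sim Beta(\alpha_c, \beta_c)$, et l'espérance d'une loi Beta vaut $\alpha_c/(\alpha_c+\beta_c)$.

```python
import random

def choisir_niveau(alpha, beta):
    """Tirage de type Thompson puis choix du niveau de complexité."""
    n = len(alpha)
    # Tirage illustratif du pas d'échantillonnage : theta_c ~ Beta(alpha_c, beta_c)
    theta = [random.betavariate(alpha[c], beta[c]) for c in range(n)]
    # E[theta_c] = alpha_c / (alpha_c + beta_c) pour une loi Beta
    esperances = [alpha[c] / (alpha[c] + beta[c]) for c in range(n)]
    # ncl : niveau dont l'espérance est maximale
    return max(range(n), key=lambda c: esperances[c])
```

Avec l'initialisation du tableau des paramètres ($\alpha_{x,1}=2$, les autres valeurs à 1), le niveau 0 est ainsi privilégié au démarrage, ce qui correspond au comportement observé lors du démarrage à froid.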
137 137
\subsection{Résultats} 138 138 \subsection{Résultats}
139 139
Le comportement du module de recommandation a été testé avec des données générées contenant les notes et les temps de réponse de mille apprenants pour cinq niveaux de complexité différents. Ces données sont décrites dans le tableau \ref{tabDataSet}. Les notes des apprenants sont générées selon la loi de probabilité logit-normale considérée comme la plus fidèle dans ce contexte par \cite{Arthurs}. 140 140 Le comportement du module de recommandation a été testé avec des données générées contenant les notes et les temps de réponse de mille apprenants pour cinq niveaux de complexité différents. Ces données sont décrites dans le tableau \ref{tabDataSet}. Les notes des apprenants sont générées selon la loi de probabilité logit-normale considérée comme la plus fidèle dans ce contexte par \cite{Arthurs}.
142 142
L'ensemble de données générées résulte d'une simulation des notes obtenues par des apprenants virtuels ayant répondu à quinze questions réparties sur cinq niveaux de complexité. L'ensemble de données simule, via la distribution de probabilité logit-normale, une faiblesse dans chaque niveau de complexité pour 70\% des apprenants sur les dix premières questions. La difficulté de la complexité est quant à elle simulée en réduisant le score moyen et en augmentant la variance. La figure \ref{figData} montre la manière dont sont réparties les notes selon le niveau de complexité. 143 143 L'ensemble de données générées résulte d'une simulation des notes obtenues par des apprenants virtuels ayant répondu à quinze questions réparties sur cinq niveaux de complexité. L'ensemble de données simule, via la distribution de probabilité logit-normale, une faiblesse dans chaque niveau de complexité pour 70\% des apprenants sur les dix premières questions. La difficulté de la complexité est quant à elle simulée en réduisant le score moyen et en augmentant la variance. La figure \ref{figData} montre la manière dont sont réparties les notes selon le niveau de complexité.
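À titre d'illustration, la génération de notes selon la loi logit-normale peut s'esquisser ainsi en Python. Les paramètres $\mu$ et $\sigma$ par niveau sont des valeurs hypothétiques choisies pour reproduire la tendance décrite (moyenne réduite et variance accrue avec la complexité), et non ceux du jeu de données réel.

```python
import math
import random

def note_logit_normale(mu, sigma, g_m=10.0):
    """Tire une note dans [0, g_m] : sigmoïde d'une variable normale (loi logit-normale)."""
    z = random.gauss(mu, sigma)
    return g_m / (1.0 + math.exp(-z))

# Difficulté croissante simulée : moyenne réduite et variance accrue avec le niveau
random.seed(0)
notes = {c: [note_logit_normale(mu=1.0 - 0.5 * c, sigma=0.8 + 0.2 * c)
             for _ in range(1000)] for c in range(5)}
```

Les notes ainsi générées restent bornées dans $[0, g_m]$ et leur moyenne décroît avec le niveau de complexité, comme sur la figure \ref{figData}.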
144 144
\begin{figure} 145 145 \begin{figure}
\includegraphics[width=\textwidth]{./Figures/dataset.png} 146 146 \includegraphics[width=\textwidth]{./Figures/dataset.png}
\caption{Répartition des notes générées selon le niveau de complexité.} 147 147 \caption{Répartition des notes générées selon le niveau de complexité.}
\label{figData} 148 148 \label{figData}
\end{figure} 149 149 \end{figure}
150 150
\begin{table}[!ht] 151 151 \begin{table}[!ht]
\centering 152 152 \centering
\begin{tabular}{ccc} 153 153 \begin{tabular}{ccc}
ID&Description&Domaine\\ 154 154 ID&Description&Domaine\\
\hline 155 155 \hline
$q_{c}$&Niveau de complexité d'une question $q$&$[0, c_n] \in \mathbb{N}$\\
$q_{g,c}$&Note obtenue $g$ pour la question $q$ avec complexité $c$ &$[0,g_m] \in \mathbb{R}$\\ 157 157 $q_{g,c}$&Note obtenue $g$ pour la question $q$ avec complexité $c$ &$[0,g_m] \in \mathbb{R}$\\
$q_{t,c}$&Temps employé $t$ pour une question $q$ avec complexité $c$&$[0, t_m] \in \mathbb{R}$\\ 158 158 $q_{t,c}$&Temps employé $t$ pour une question $q$ avec complexité $c$&$[0, t_m] \in \mathbb{R}$\\
\end{tabular} 159 159 \end{tabular}
\caption{Description des données utilisées pour l'évaluation.} 160 160 \caption{Description des données utilisées pour l'évaluation.}
\label{tabDataSet} 161 161 \label{tabDataSet}
\end{table} 162 162 \end{table}
163 163
Toutes les valeurs des paramètres pour tester l'algorithme sont dans le tableau \ref{tabgm1}. 164 164 Toutes les valeurs des paramètres pour tester l'algorithme sont dans le tableau \ref{tabgm1}.
165 165
\begin{table}[!ht] 166 166 \begin{table}[!ht]
\centering 167 167 \centering
\begin{tabular}{c|cccccccccccccc} 168 168 \begin{tabular}{c|cccccccccccccc}
ID&$c_n$&$g_m$&$t_m$&$s$&$s_c$&$\lambda$&$g_t$&$\alpha_{x,1}$&$\alpha_{x,y}$&$\beta_{x,1}$&$\Delta \beta_{x,y}$&$\Delta_1$&$\Delta_2$&$\Delta_3$\\ 169 169 ID&$c_n$&$g_m$&$t_m$&$s$&$s_c$&$\lambda$&$g_t$&$\alpha_{x,1}$&$\alpha_{x,y}$&$\beta_{x,1}$&$\Delta \beta_{x,y}$&$\Delta_1$&$\Delta_2$&$\Delta_3$\\
\hline 170 170 \hline
Valeur&5&10&120&3&2&0.25&6 & 2 & 1 & 1 & 1 & 0.3 & 0.5 & 0.7\\ 171 171 Valeur&5&10&120&3&2&0.25&6 & 2 & 1 & 1 & 1 & 0.3 & 0.5 & 0.7\\
\end{tabular} 172 172 \end{tabular}
\caption{Valeurs des paramètres pour les scénarios évalués} 173 173 \caption{Valeurs des paramètres pour les scénarios évalués}
\label{tabgm1} 174 174 \label{tabgm1}
\end{table} 175 175 \end{table}
176 176
La figure \ref{figCmp2} permet de comparer les résultats obtenus par le module proposé, un système de recommandation déterministe et le système AI-VT initial lors d'un \textit{démarrage à froid} (c'est-à-dire sans données historiques ni informations préalables sur le profil de l'apprenant). Sur les graphiques de cette figure, les numéros des questions posées sont reportés en abscisse selon l'ordre chronologique d'apparition durant la séance d'entraînement, le niveau de complexité de chaque question posée est représenté par une couleur différente, et le nombre d'apprenants ayant eu des questions de ce niveau de complexité est reporté en ordonnée.
Ainsi, le système AI-VT initial (premier graphique de la figure) et le système de recommandation déterministe (deuxième graphique) ont tous deux proposé trois questions de niveau de complexité 0 (le plus faible) à tous les apprenants au démarrage de la séance d'entraînement. Nous pouvons remarquer que le système initial est resté sur ce niveau de complexité durant toute la séance (pour les 15 questions du test), tandis que le système de recommandation déterministe a progressivement mixé les complexités des questions posées. Le système de recommandation stochastique décrit dans ce chapitre a quant à lui mixé ces niveaux de complexité dès la première question.
178 178
Ainsi, les systèmes de recommandation permettent de proposer une adaptation progressive du niveau de complexité en fonction des notes obtenues. L'algorithme déterministe génère quatre grandes transitions avec un grand nombre d'apprenants dans les questions 5, 6, 8 et 12, toutes entre des niveaux de complexité contigus. La tendance est à la baisse pour les niveaux 0, 1 et 2 après la huitième question et à la hausse pour les niveaux 1 et 3. L'algorithme stochastique commence par proposer tous les niveaux de complexité possibles tout en privilégiant le niveau 0. Avec ce système, les transitions sont constantes mais pour un petit nombre d'apprenants. La tendance après la dixième question est à la baisse pour les niveaux 0 et 4 et à la hausse pour les niveaux 1, 2 et 3. 179 179 Ainsi, les systèmes de recommandation permettent de proposer une adaptation progressive du niveau de complexité en fonction des notes obtenues. L'algorithme déterministe génère quatre grandes transitions avec un grand nombre d'apprenants dans les questions 5, 6, 8 et 12, toutes entre des niveaux de complexité contigus. La tendance est à la baisse pour les niveaux 0, 1 et 2 après la huitième question et à la hausse pour les niveaux 1 et 3. L'algorithme stochastique commence par proposer tous les niveaux de complexité possibles tout en privilégiant le niveau 0. Avec ce système, les transitions sont constantes mais pour un petit nombre d'apprenants. La tendance après la dixième question est à la baisse pour les niveaux 0 et 4 et à la hausse pour les niveaux 1, 2 et 3.
180 180
\begin{figure} 181 181 \begin{figure}
\includegraphics[width=\textwidth]{./Figures/comp2.png} 182 182 \includegraphics[width=\textwidth]{./Figures/comp2.png}
\caption{Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la première séance avec un démarrage à froid (sans données initiales sur les apprenants). Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)} 183 183 \caption{Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la première séance avec un démarrage à froid (sans données initiales sur les apprenants). Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)}
\label{figCmp2} 184 184 \label{figCmp2}
\end{figure} 185 185 \end{figure}
186 186
Après la génération de la première séance, le système peut continuer avec une deuxième liste d'exercices. Pour cette partie des tests, les trois algorithmes ont été initialisés avec les mêmes données, et des valeurs égales pour tous les apprenants. La figure \ref{figCmp3} permet de voir que la première transition du système initial n'intervient qu'entre deux séances. Les transitions sont très lentes, et tous les apprenants doivent suivre un chemin identique même s'ils obtiennent des notes différentes au cours de celle-ci. 187 187 Après la génération de la première séance, le système peut continuer avec une deuxième liste d'exercices. Pour cette partie des tests, les trois algorithmes ont été initialisés avec les mêmes données, et des valeurs égales pour tous les apprenants. La figure \ref{figCmp3} permet de voir que la première transition du système initial n'intervient qu'entre deux séances. Les transitions sont très lentes, et tous les apprenants doivent suivre un chemin identique même s'ils obtiennent des notes différentes au cours de celle-ci.
188 188
Pour leur part, les deux autres systèmes de recommandation testés proposent un fonctionnement différent. L'algorithme déterministe présente trois transitions aux questions 3, 5 et 12. Les tendances y sont relativement homogènes et progressives pour le niveau 3, très variables pour le niveau 2 et fortement décroissantes pour le niveau 0. L'algorithme stochastique, quant à lui, propose des transitions douces, mais il a tendance à toujours privilégier le niveau le plus faible. Nous pouvons observer une prépondérance du niveau 1 avec ce système. Ici, les niveaux 0 et 1 sont décroissants, le niveau 2 est statique et les niveaux 3 et 4 sont ascendants.
190 190
\begin{figure} 191 191 \begin{figure}
\includegraphics[width=\textwidth]{./Figures/comp3.png} 192 192 \includegraphics[width=\textwidth]{./Figures/comp3.png}
\caption{Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la deuxième séance. Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)} 193 193 \caption{Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la deuxième séance. Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)}
\label{figCmp3} 194 194 \label{figCmp3}
\end{figure} 195 195 \end{figure}
196 196
Les questions de la première et de la deuxième séance étant de niveaux 0 et 1, le système a proposé des niveaux de complexité 1 ou 2 pour la troisième séance. La figure \ref{figCmp4} montre que le système initial est très lent à passer d'un niveau à l'autre. La figure \ref{figCmp3} montre en effet que le système initial ne réagit qu'aux notes obtenues lors des séances précédentes et non à celles de la séance en cours. Dans ce cas, l'algorithme de recommandation déterministe adopte la même stratégie et propose un changement brutal à tous les apprenants autour de la cinquième question. L'algorithme stochastique continue avec des changements progressifs tout en privilégiant le niveau 2.
198 198
\begin{figure} 199 199 \begin{figure}
\includegraphics[width=\textwidth]{./Figures/comp4.png} 200 200 \includegraphics[width=\textwidth]{./Figures/comp4.png}
\caption{Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la troisième séance. Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)} 201 201 \caption{Niveaux de complexité des questions posées aux apprenants par les trois systèmes testés lors de la troisième séance. Gauche - RàPC, centre - recommandation déterministe (DM), droite - moyenne de 100 exécutions de recommandation stochastique (SM)}
\label{figCmp4} 202 202 \label{figCmp4}
\end{figure} 203 203 \end{figure}
204 204
Pour comparer numériquement le système initial, l'algorithme déterministe et l'algorithme de recommandation proposé, un ensemble d'équations a été défini (équation \ref{eqMetric1} et équation \ref{eqMetric2}). Celles-ci permettent de décrire le système de recommandation idéal si l'objectif de l'apprenant est de suivre un apprentissage standard. Une valeur est calculée pour chaque niveau de complexité en fonction de la moyenne des notes et du nombre de questions recommandées dans ce niveau de complexité. L'objectif de cette mesure est d'attribuer un score élevé aux systèmes de recommandation qui proposent plus d'exercices au niveau de complexité où l'apprenant a obtenu une note moyenne plus basse, lui permettant ainsi de renforcer ses connaissances pour ce niveau de complexité. De la même manière, il est attendu que le système de recommandation propose moins d'exercices aux niveaux de complexité pour lesquels les notes moyennes sont élevées, l'étudiant ayant acquis des connaissances suffisantes à ces niveaux de complexité. Les scores faibles sont attribués aux systèmes qui recommandent peu d'exercices à des niveaux de complexité dont les notes moyennes sont faibles et, inversement, s'ils proposent beaucoup d'exercices à des niveaux de complexité dont les notes moyennes sont élevées. 205 205 Pour comparer numériquement le système initial, l'algorithme déterministe et l'algorithme de recommandation proposé, un ensemble d'équations a été défini (équation \ref{eqMetric1} et équation \ref{eqMetric2}). Celles-ci permettent de décrire le système de recommandation idéal si l'objectif de l'apprenant est de suivre un apprentissage standard. Une valeur est calculée pour chaque niveau de complexité en fonction de la moyenne des notes et du nombre de questions recommandées dans ce niveau de complexité. 
L'objectif de cette mesure est d'attribuer un score élevé aux systèmes de recommandation qui proposent plus d'exercices au niveau de complexité où l'apprenant a obtenu une note moyenne plus basse, lui permettant ainsi de renforcer ses connaissances pour ce niveau de complexité. De la même manière, il est attendu que le système de recommandation propose moins d'exercices aux niveaux de complexité pour lesquels les notes moyennes sont élevées, l'étudiant ayant acquis des connaissances suffisantes à ces niveaux de complexité. Les scores faibles sont attribués aux systèmes qui recommandent peu d'exercices à des niveaux de complexité dont les notes moyennes sont faibles et, inversement, s'ils proposent beaucoup d'exercices à des niveaux de complexité dont les notes moyennes sont élevées.
206 206
\begin{equation} 207 207 \begin{equation}
rp_c(x)=e^{-2(x_{0,c}+x_{1,c}-1)^2} ; \quad \{x \in \mathbb{R}^2 \mid 0 \leq x \leq 1\}
\label{eqMetric1} 211 211 \label{eqMetric1}
\end{equation} 212 212 \end{equation}
213 213
\begin{equation} 214 214 \begin{equation}
r=\sum_{c=0}^{c_n-1} rp_c 215 215 r=\sum_{c=0}^{c_n-1} rp_c
\label{eqMetric2} 216 216 \label{eqMetric2}
\end{equation} 217 217 \end{equation}
218 218
Les propriétés de la métrique sont : 219 219 Les propriétés de la métrique sont :
\begin{itemize} 220 220 \begin{itemize}
\item $\{\forall x \in \mathbb{R}^2 \mid 0 \leq x \leq 1\},\ rp_c(x)>0$
\item $\max(rp_c(x))=1 \;\text{si}\; x_{0,c}+x_{1,c}=1$
\item $\min(rp_c(x))=e^{-2}\approx 0.1353 \;\text{si}\; \left ( \sum_{i=1}^2 x_{i,c}=0 \;\lor\; \sum_{i=1}^2 x_{i,c} = 2 \right )$\\
\end{itemize} 224 224 \end{itemize}
225 225
Dans l'équation \ref{eqMetric1}, $x_{0,c}$ est la moyenne normalisée des notes dans le niveau de complexité $c$ (équation \ref{eqXc}), et $x_{1,c}$ est le nombre normalisé de questions auxquelles des réponses ont été données dans le niveau de complexité $c$ (équation \ref{eqYc}). Ainsi, plus la valeur de $r$ est élevée, meilleure est la recommandation. 226 226 Dans l'équation \ref{eqMetric1}, $x_{0,c}$ est la moyenne normalisée des notes dans le niveau de complexité $c$ (équation \ref{eqXc}), et $x_{1,c}$ est le nombre normalisé de questions auxquelles des réponses ont été données dans le niveau de complexité $c$ (équation \ref{eqYc}). Ainsi, plus la valeur de $r$ est élevée, meilleure est la recommandation.
227 227
\begin{equation} 228 228 \begin{equation}
x_{0,c}=\frac{<g_c>_{G_c}}{g_m} 229 229 x_{0,c}=\frac{<g_c>_{G_c}}{g_m}
\label{eqXc} 230 230 \label{eqXc}
\end{equation} 231 231 \end{equation}
232 232
\begin{equation} 233 233 \begin{equation}
x_{1,c}=\frac{ny_c}{n_c} 234 234 x_{1,c}=\frac{ny_c}{n_c}
\label{eqYc} 235 235 \label{eqYc}
\end{equation} 236 236 \end{equation}
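Les équations \ref{eqMetric1}, \ref{eqMetric2}, \ref{eqXc} et \ref{eqYc} peuvent s'esquisser ainsi en Python. Esquisse sous hypothèses : les noms de fonctions et de paramètres sont illustratifs, et `n_q` est supposé désigner le nombre de questions servant à normaliser $x_{1,c}$.

```python
import math

def rp_c(x0, x1):
    """Qualité de la recommandation pour un niveau de complexité (eq. eqMetric1)."""
    return math.exp(-2.0 * (x0 + x1 - 1.0) ** 2)

def r_total(notes_moyennes, nb_questions, g_m=10.0, n_q=15):
    """Somme de rp_c sur les niveaux (eq. eqMetric2), avec x_{0,c} = <g_c>/g_m
    (eq. eqXc) et x_{1,c} = ny_c/n_q (eq. eqYc)."""
    return sum(rp_c(g / g_m, ny / n_q)
               for g, ny in zip(notes_moyennes, nb_questions))
```

La métrique atteint son maximum lorsque $x_{0,c}+x_{1,c}=1$ : plus la note moyenne d'un niveau est basse, plus le nombre de questions qui y sont posées doit être élevé, et réciproquement.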
237 237
La figure \ref{figMetric} représente la fonction $rp_c(x)$. La valeur maximale de $rp_c$ pour un niveau de complexité donné étant égale à $1$, la valeur maximale globale de $r$ pour les scénarios testés (cinq niveaux) est égale à $5$.
239 239
\begin{figure} 240 240 \begin{figure}
\includegraphics[width=\textwidth]{./Figures/metric.png} 241 241 \includegraphics[width=\textwidth]{./Figures/metric.png}
\caption{Fonction d'évaluation de la qualité de la recommandation pour un parcours standard} 242 242 \caption{Fonction d'évaluation de la qualité de la recommandation pour un parcours standard}
\label{figMetric} 243 243 \label{figMetric}
\end{figure} 244 244 \end{figure}
245 245
Les résultats des calculs de la métrique $rp_c(x)$ établie pour le système initial et les deux algorithmes dans les trois scénarios testés sont présentés dans le tableau \ref{tabRM}. 246 246 Les résultats des calculs de la métrique $rp_c(x)$ établie pour le système initial et les deux algorithmes dans les trois scénarios testés sont présentés dans le tableau \ref{tabRM}.
247 247
\begin{table}[!ht] 248 248 \begin{table}[!ht]
\centering 249 249 \centering
\begin{tabular}{cccccccc} 250 250 \begin{tabular}{cccccccc}
&$c_0$&$c_1$&$c_2$&$c_3$&$c_4$&Total ($r$)&Total ($\%$)\\ 251 251 &$c_0$&$c_1$&$c_2$&$c_3$&$c_4$&Total ($r$)&Total ($\%$)\\
\hline 252 252 \hline
Test 1\\ 253 253 Test 1\\
\hline 254 254 \hline
RàPC&0.5388&-&-&-&-&0.5388&10.776\\ 255 255 RàPC&0.5388&-&-&-&-&0.5388&10.776\\
DM&0.8821&0.7282&\textbf{0.9072}&\textbf{0.8759}&-&3.3934&67.868\\ 256 256 DM&0.8821&0.7282&\textbf{0.9072}&\textbf{0.8759}&-&3.3934&67.868\\
SM&\textbf{0.9463}&\textbf{0.8790}&0.7782&0.7108&0.6482&\textbf{3.9625}&\textbf{79.25}\\ 257 257 SM&\textbf{0.9463}&\textbf{0.8790}&0.7782&0.7108&0.6482&\textbf{3.9625}&\textbf{79.25}\\
\hline 258 258 \hline
Test 2\\ 259 259 Test 2\\
\hline 260 260 \hline
RàPC&0.9445&\textbf{0.9991}&-&-&-&1.9436&38.872\\ 261 261 RàPC&0.9445&\textbf{0.9991}&-&-&-&1.9436&38.872\\
DM&-&0.9443&\textbf{0.8208}&\textbf{0.9623}&-&2.7274&54.548\\ 262 262 DM&-&0.9443&\textbf{0.8208}&\textbf{0.9623}&-&2.7274&54.548\\
SM&\textbf{0.9688}&0.9861&0.8067&0.7161&0.6214&\textbf{4.0991}&\textbf{81.982}\\ 263 263 SM&\textbf{0.9688}&0.9861&0.8067&0.7161&0.6214&\textbf{4.0991}&\textbf{81.982}\\
\hline 264 264 \hline
Test3\\ 265 265 Test3\\
\hline 266 266 \hline
RàPC&-&0.8559&0.7377&-&-&1.5936&31.872 267 267 RàPC&-&0.8559&0.7377&-&-&1.5936&31.872
\\ 268 268 \\
DM&-&-&0.5538&\textbf{0.7980}&-&1.3518&27.036\\ 269 269 DM&-&-&0.5538&\textbf{0.7980}&-&1.3518&27.036\\
SM&0.9089&\textbf{0.9072}&\textbf{0.9339}&0.7382&0.6544&\textbf{4.1426}&\textbf{82.852}\\ 270 270 SM&0.9089&\textbf{0.9072}&\textbf{0.9339}&0.7382&0.6544&\textbf{4.1426}&\textbf{82.852}\\
\end{tabular} 271 271 \end{tabular}
\caption{Résultats de la métrique $rp_c(x)$ (RàPC - Système sans module de recommandation, DM - Module de recommandation déterministe, SM - Module de recommandation stochastique)} 272 272 \caption{Résultats de la métrique $rp_c(x)$ (RàPC - Système sans module de recommandation, DM - Module de recommandation déterministe, SM - Module de recommandation stochastique)}
\label{tabRM} 273 273 \label{tabRM}
\end{table} 274 274 \end{table}
275 275
Les équations \ref{eqMetricS1} et \ref{eqMetricS2} permettent de caractériser un apprentissage progressif. Dans ce cas, un score élevé est attribué aux systèmes proposant plus d'exercices dans un niveau de complexité où les notes moyennes sont légèrement insuffisantes (4/10), plus flexibles avec des notes moyennes plus basses, et un petit nombre d'exercices pour des notes moyennes élevées. Les scores faibles sont attribués aux systèmes qui recommandent de nombreuses questions dans un niveau de complexité avec des notes moyennes élevées ou faibles. 276 276 Les équations \ref{eqMetricS1} et \ref{eqMetricS2} permettent de caractériser un apprentissage progressif. Dans ce cas, un score élevé est attribué aux systèmes proposant plus d'exercices dans un niveau de complexité où les notes moyennes sont légèrement insuffisantes (4/10), plus flexibles avec des notes moyennes plus basses, et un petit nombre d'exercices pour des notes moyennes élevées. Les scores faibles sont attribués aux systèmes qui recommandent de nombreuses questions dans un niveau de complexité avec des notes moyennes élevées ou faibles.
277 277
\begin{equation} 278 278 \begin{equation}
rs_c(x)=e^{-\frac{2}{100}(32x_{0,c}^2-28x_{0,c}+10x_{1,c}-4)^2} ; \quad \{x \in \mathbb{R}^2 \mid 0 \leq x \leq 1\}
\label{eqMetricS1} 280 280 \label{eqMetricS1}
\end{equation} 281 281 \end{equation}
282 282
\begin{equation} 283 283 \begin{equation}
r=\sum_{c=0}^{c_n-1} rs_c 284 284 r=\sum_{c=0}^{c_n-1} rs_c
\label{eqMetricS2} 285 285 \label{eqMetricS2}
\end{equation} 286 286 \end{equation}
287 287
Les propriétés de la métrique sont : 288 288 Les propriétés de la métrique sont :
\begin{itemize} 289 289 \begin{itemize}
\item $\{\forall x \in \mathbb{R}^2 \mid 0 \leq x \leq 1\},\ rs_c(x)>0$
\item $\max(rs_c(x))=1 \;\text{si}\; 16x_{0,c}^2-14x_{0,c}+5x_{1,c}-2=0$\\
\end{itemize} 292 292 \end{itemize}
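L'équation \ref{eqMetricS1} peut s'esquisser ainsi en Python (esquisse illustrative ; l'exposant est nul, et donc $rs_c=1$, dès que $32x_{0,c}^2-28x_{0,c}+10x_{1,c}-4=0$, condition équivalente à $16x_{0,c}^2-14x_{0,c}+5x_{1,c}-2=0$) :

```python
import math

def rs_c(x0, x1):
    """Qualité de la recommandation pour un apprentissage progressif (eq. eqMetricS1)."""
    return math.exp(-(2.0 / 100.0) * (32 * x0 ** 2 - 28 * x0 + 10 * x1 - 4) ** 2)
```

Par exemple, une note moyenne normalisée $x_{0,c}=0.5$ avec toutes les questions posées à ce niveau ($x_{1,c}=1$) annule l'exposant et donne le score maximal, tandis qu'un niveau déjà maîtrisé ($x_{0,c}=1$) recevant autant de questions est pénalisé.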
293 293
La figure \ref{figMetric2} représente la fonction $rs_c(x)$. Comme pour $rp_c$, la valeur maximale de $rs_c$ pour un niveau de complexité donné étant égale à $1$, la valeur maximale globale de $r$ pour les scénarios testés est égale à $5$.
295 295
Les résultats du calcul des métriques pour le système initial et les deux algorithmes dans les trois scénarios définis sont présentés dans le tableau \ref{tabRM2}. 296 296 Les résultats du calcul des métriques pour le système initial et les deux algorithmes dans les trois scénarios définis sont présentés dans le tableau \ref{tabRM2}.
297 297
\begin{figure}[!ht] 298 298 \begin{figure}[!ht]
\centering 299 299 \centering
\includegraphics[width=\textwidth]{./Figures/metric2.png} 300 300 \includegraphics[width=\textwidth]{./Figures/metric2.png}
\caption{Fonction d'évaluation de la qualité de la recommandation pour un apprentissage progressif.} 301 301 \caption{Fonction d'évaluation de la qualité de la recommandation pour un apprentissage progressif.}
\label{figMetric2} 302 302 \label{figMetric2}
\end{figure} 303 303 \end{figure}
304 304
\begin{table}[!ht] 305 305 \begin{table}[!ht]
\centering 306 306 \centering
\begin{tabular}{cccccccc} 307 307 \begin{tabular}{cccccccc}
&$c_0$&$c_1$&$c_2$&$c_3$&$c_4$&Total ($r$)&Total ($\%$)\\ 308 308 &$c_0$&$c_1$&$c_2$&$c_3$&$c_4$&Total ($r$)&Total ($\%$)\\
\hline 309 309 \hline
Séance 1\\ 310 310 Séance 1\\
\hline 311 311 \hline
RàPC&\textbf{0.9979}&-&-&-&-&0.9979&19.96\\ 312 312 RàPC&\textbf{0.9979}&-&-&-&-&0.9979&19.96\\
DM&0.8994&0.1908&\textbf{0.3773}&\textbf{0.2990}&-&1.7665&35.33\\ 313 313 DM&0.8994&0.1908&\textbf{0.3773}&\textbf{0.2990}&-&1.7665&35.33\\
SM&0.8447&\textbf{0.3012}&0.2536&0.2030&\textbf{0.1709}&\textbf{1.7734}&\textbf{35.47}\\ 314 314 SM&0.8447&\textbf{0.3012}&0.2536&0.2030&\textbf{0.1709}&\textbf{1.7734}&\textbf{35.47}\\
\hline 315 315 \hline
Séance 2\\ 316 316 Séance 2\\
\hline 317 317 \hline
RàPC&\textbf{0.4724}&\textbf{0.7125}&-&-&-&1.1849&23.70\\ 318 318 RàPC&\textbf{0.4724}&\textbf{0.7125}&-&-&-&1.1849&23.70\\
DM&-&0.6310&\textbf{0.3901}&\textbf{0.4253}&-&1.4464&28.93\\ 319 319 DM&-&0.6310&\textbf{0.3901}&\textbf{0.4253}&-&1.4464&28.93\\
SM&0.2697&0.7089&0.2634&0.2026&\textbf{0.1683}&\textbf{1.6129}&\textbf{32.26}\\ 320 320 SM&0.2697&0.7089&0.2634&0.2026&\textbf{0.1683}&\textbf{1.6129}&\textbf{32.26}\\
\hline 321 321 \hline
Séance 3\\ 322 322 Séance 3\\
\hline 323 323 \hline
RàPC&-&\textbf{0.9179}&0.2692&-&-&1.1871&23.74 324 324 RàPC&-&\textbf{0.9179}&0.2692&-&-&1.1871&23.74
\\ 325 325 \\
DM&-&-&0.2236&\textbf{0.9674}&-&1.191&23.82\\ 326 326 DM&-&-&0.2236&\textbf{0.9674}&-&1.191&23.82\\
SM&0.1873&0.3038&\textbf{0.6345}&0.2394&\textbf{0.1726}&\textbf{1.5376}&\textbf{30.75}\\ 327 327 SM&0.1873&0.3038&\textbf{0.6345}&0.2394&\textbf{0.1726}&\textbf{1.5376}&\textbf{30.75}\\
\end{tabular} 328 328 \end{tabular}
\caption{Évaluation des recommandations proposées selon $rs_c(x)$ par les différents systèmes de recommandation testés : RàPC - Système sans module de recommandation, DM - Algorithme déterministe, SM - Algorithme stochastique}
\label{tabRM2} 330 330 \label{tabRM2}
\end{table} 331 331 \end{table}
332 332
En complément, le tableau \ref{tabCS} présente les similarités entre les recommandations faites aux apprenants par les trois systèmes, pour chacune des trois séances d'entraînement. Pour ce faire, nous avons choisi d'appliquer l'équation \ref{eqCS}, qui calcule une similarité cosinus entre deux vecteurs $A$ et $B$.
334 334
\begin{equation} 335 335 \begin{equation}
sc=\frac{\sum_{i=1}^n A_i B_i}{\sqrt{\sum_{i=1}^n A_i^2} \sqrt{\sum_{i=1}^n B_i^2}} 336 336 sc=\frac{\sum_{i=1}^n A_i B_i}{\sqrt{\sum_{i=1}^n A_i^2} \sqrt{\sum_{i=1}^n B_i^2}}
\label{eqCS} 337 337 \label{eqCS}
\end{equation} 338 338 \end{equation}
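L'équation \ref{eqCS} s'implémente directement, par exemple ainsi en Python (esquisse ; les vecteurs $A$ et $B$ représentent ici les séquences de niveaux recommandés à deux apprenants) :

```python
import math

def similarite_cosinus(A, B):
    """Similarité cosinus entre deux vecteurs de recommandations (eq. eqCS)."""
    num = sum(a * b for a, b in zip(A, B))
    den = math.sqrt(sum(a * a for a in A)) * math.sqrt(sum(b * b for b in B))
    return num / den
```

Une valeur de 1 indique des recommandations identiques pour tous les apprenants (aucune diversité), ce qui explique le score constant du système RàPC initial dans le tableau \ref{tabCS}.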
339 339
\begin{table}[!ht] 340 340 \begin{table}[!ht]
\centering 341 341 \centering
\begin{tabular}{cccc} 342 342 \begin{tabular}{cccc}
Système de recommandation & Séance 1 & Séance 2 & Séance 3\\ 343 343 Système de recommandation & Séance 1 & Séance 2 & Séance 3\\
\hline 344 344 \hline
RàPC&1&1&1\\ 345 345 RàPC&1&1&1\\
DM&0.9540&0.9887&0.9989\\ 346 346 DM&0.9540&0.9887&0.9989\\
SM&\textbf{0.8124}&\textbf{0.8856}&\textbf{0.9244}\\ 347 347 SM&\textbf{0.8124}&\textbf{0.8856}&\textbf{0.9244}\\
\end{tabular} 348 348 \end{tabular}
\caption{Moyenne de la diversité des propositions pour tous les apprenants. Une valeur plus faible représente une plus grande diversité. (RàPC - Système sans module de recommandation, DM - Module déterministe, SM - Module stochastique)}
\label{tabCS} 350 350 \label{tabCS}
\end{table} 351 351 \end{table}

\subsection{Discussion et Conclusion}
Avec la génération d'exercices par le système de RàPC initial, AI-VT propose les mêmes exercices à tous les apprenants, et l'évolution des niveaux de complexité est très lente, avec un changement toutes les trois ou quatre séances environ. En effet, le système ne prend pas en compte les notes obtenues pendant la séance. Les systèmes intégrant l'un des modules de recommandation testés sont plus dynamiques et les évolutions sont plus rapides. En considérant les notes des apprenants, l'algorithme déterministe suggère des changements de niveaux de manière soudaine à un grand nombre d'apprenants, tandis que l'algorithme stochastique, plus axé sur la personnalisation individuelle, ne produit des changements de niveau de complexité que pour un petit nombre d'apprenants. Les deux modules de recommandation proposés ont la capacité de détecter les faiblesses des apprenants et d'adapter la séance à leurs besoins particuliers.

Les données générées ont permis de simuler diverses situations avec les notes de mille apprenants, permettant ainsi d'évaluer le comportement des systèmes de recommandation avec différentes configurations.

Les résultats numériques montrent que les distributions des questions dans une séance par les deux modules de recommandation sont différentes, bien que la tendance générale soit similaire. Les modules de recommandation proposés tentent de répartir les questions dans tous les niveaux de complexité définis. Globalement, le module de recommandation stochastique a obtenu un meilleur score. Par rapport au système initial, les modules de recommandation (déterministe et stochastique) proposent 15\% à 68\% d'adaptations de la complexité pour tous les niveaux. Pour cette raison, l'approche stochastique sera préférée à l'approche déterministe dans la suite des travaux de recherche.

Selon la métrique de la similarité cosinus, le module de recommandation stochastique augmente la diversité des propositions par rapport au système initial dans les trois séances d'entraînement testées, ce qui indique qu'en plus d'atteindre l'adaptabilité, des propositions personnalisées sont générées tout en maintenant l'objectif de progression des niveaux de compétence des apprenants. La diversité des propositions est une caractéristique essentielle de l'algorithme de recommandation dans ses deux versions.

Les modules de recommandation sont un élément essentiel pour certains EIAH car ils aident à guider le processus d'apprentissage individuel. Ils permettent également d'identifier les faiblesses et de réorienter le processus complet afin d'améliorer les connaissances et les compétences. Les deux modules de recommandation proposés peuvent détecter en temps réel les faiblesses de l'apprenant et tentent de réorienter la séance vers le niveau de complexité le plus adapté. Même si l'ensemble des données générées est une simulation de temps de réponse et de notes fictives d'apprenants fictifs, les tests démontrent la flexibilité et la robustesse des modules de recommandation proposés : les données relatives aux apprenants présentent en effet une grande diversité et obligent le système à s'adapter à différents types de configuration. Par conséquent, il est possible de conclure que les modules de recommandation proposés ont la capacité de fonctionner dans différentes situations et de proposer des chemins alternatifs et personnalisés pour améliorer le processus d'apprentissage global.

\section{ESCBR-SMA et échantillonnage de Thompson}
\sectionmark{ESCBR-SMA et TS}

La section précédente a démontré l'intérêt de l'intégration d'un module de recommandation afin de proposer des exercices d'un niveau de difficulté adapté aux besoins de l'apprenant en fonction des difficultés décelées au cours de la séance d'entraînement. Le système AI-VT initial, fondé sur le cycle du raisonnement à partir de cas et ne proposant que des adaptations entre deux séances d'entraînement consécutives, a été supplanté par l'intégration de modules de recommandation utilisés durant la phase de révision du cycle classique du RàPC. Les deux modules de recommandation testés dans la section précédente étaient l'un déterministe, l'autre stochastique.

La section précédente a également démontré qu'il était possible et intéressant que les niveaux de complexité des exercices proposés puissent suivre des fonctions permettant de les faire fluctuer de manière progressive au cours de la séance, et ce afin que les apprenants ne soient pas confrontés à des difficultés changeant de manière trop abrupte durant l'entraînement. Cette étude nous amène donc à considérer la résolution de la génération d'une séance d'exercices sous l'angle de la régression. Nous proposons donc dans cette partie de montrer de quelle manière nous avons intégré et vérifié l'intérêt des outils définis dans le chapitre précédent dans l'EIAH AI-VT.

\subsection{Concepts Associés}

Cette section présente les concepts, les définitions et les algorithmes nécessaires à la compréhension du module proposé. Le paradigme fondamental utilisé dans ce travail est le raisonnement à partir de cas (RàPC), qui permet d'exploiter les connaissances acquises et l'expérience accumulée pour résoudre un problème spécifique. L'idée principale est de rechercher des situations antérieures similaires et d'utiliser l'expérience acquise pour résoudre de nouveaux problèmes. Le RàPC suit classiquement un cycle de quatre étapes pour améliorer la solution d'inférence \cite{jmse11050890}.

L'un des algorithmes les plus couramment utilisés dans les EIAH pour adapter le contenu et estimer la progression du niveau de connaissance des apprenants est le BKT (\textit{Bayesian Knowledge Tracing}) \cite{ZHANG2018189}. Cet algorithme utilise quatre paramètres pour estimer la progression des connaissances : $P(k)$ estime la probabilité de maîtrise d'une compétence spécifique, $P(w)$ est la probabilité que l'apprenant démontre ses connaissances, $P(s)$ est la probabilité que l'apprenant fasse une erreur et $P(g)$ est la probabilité que l'apprenant ait deviné une réponse. La valeur estimée de la connaissance est mise à jour selon les équations \ref{eqbkt1}, \ref{eqbkt2} et \ref{eqbkt3} : si la réponse de l'apprenant est correcte, l'équation \ref{eqbkt1} est utilisée ; si elle est incorrecte, c'est l'équation \ref{eqbkt2} qui l'est.

\begin{equation}
P(k_{t-1}|Correct_t)=\frac{P(k_{t-1})(1-P(s))}{P(k_{t-1})(1-P(s))+(1-P(k_{t-1}))P(g)}
\label{eqbkt1}
\end{equation}

\begin{equation}
P(k_{t-1}|Incorrect_t)=\frac{P(k_{t-1})P(s)}{P(k_{t-1})P(s)+(1-P(k_{t-1}))(1-P(g))}
\label{eqbkt2}
\end{equation}

\begin{equation}
P(k_{t})=P(k_{t-1}|evidence_t)+(1-P(k_{t-1}|evidence_t))P(w)
\label{eqbkt3}
\end{equation}
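Les équations \ref{eqbkt1}, \ref{eqbkt2} et \ref{eqbkt3} peuvent être esquissées ainsi en Python ; les valeurs de $P(s)$, $P(g)$, $P(w)$ et la suite de réponses sont purement illustratives.

```python
def bkt_mise_a_jour(p_k, correcte, p_s, p_g, p_w):
    """Mise a jour BKT : equations (eqbkt1)/(eqbkt2), puis (eqbkt3)."""
    if correcte:  # equation (eqbkt1) : reponse correcte
        post = p_k * (1 - p_s) / (p_k * (1 - p_s) + (1 - p_k) * p_g)
    else:         # equation (eqbkt2) : reponse incorrecte
        post = p_k * p_s / (p_k * p_s + (1 - p_k) * (1 - p_g))
    # equation (eqbkt3) : progression de la connaissance apres l'evidence
    return post + (1 - post) * p_w

# Suite de reponses fictive d'un apprenant ; parametres arbitraires
p = 0.3
for rep in [True, True, False, True]:
    p = bkt_mise_a_jour(p, rep, p_s=0.1, p_g=0.2, p_w=0.15)
```

Une réponse correcte augmente l'estimation de maîtrise, une réponse incorrecte la diminue, et le terme $P(w)$ modélise la progression entre deux observations.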

Le module de recommandation proposé, associé à AI-VT, est fondé sur le paradigme de l'apprentissage par renforcement. L'apprentissage par renforcement est une technique d'apprentissage automatique qui permet, par le biais d'actions et de récompenses, d'améliorer les connaissances du système sur une tâche spécifique \cite{NEURIPS2023_9d8cf124}. Nous nous intéressons ici plus particulièrement à l'échantillonnage de Thompson, qui, par le biais d'une distribution de probabilité initiale (distribution a priori) et d'un ensemble de règles de mise à jour prédéfinies, peut adapter et améliorer les estimations initiales d'un processus \cite{pmlr-v238-ou24a}. La distribution de probabilité initiale est généralement définie comme une distribution spécifique de la famille des distributions Beta (équation \ref{fbeta}) avec des valeurs initiales prédéterminées pour $\alpha$ et $\beta$ \cite{math12111758}, \cite{NGUYEN2024111566}.

%\begin{equation}
% Beta(x,\alpha,\beta)=\begin{cases}
% \frac{(x^{\alpha -1})(1-x)^{\beta -1}}{\int_0^1(u^{\alpha -1})(1-u)^{\beta -1} du}&x \in [0, 1]\\
% 0&otherwise
% \end{cases}
%\end{equation}

\begin{equation}
Beta(\theta | \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta)}\theta^{\alpha-1}(1-\theta)^{\beta-1}
\label{fbeta}
\end{equation}
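Une ébauche minimale de l'échantillonnage de Thompson avec un a priori $Beta(\alpha=1, \beta=1)$ par niveau de complexité pourrait s'écrire ainsi ; le nombre de niveaux et la récompense binaire sont des hypothèses d'illustration.

```python
import random

K = 5                  # nombre de niveaux de complexite (hypothese)
alpha = [1.0] * K      # a priori uniforme Beta(1, 1) pour chaque niveau
beta = [1.0] * K

def choisir_niveau():
    # un tirage Beta par niveau ; on retient le niveau au tirage maximal
    tirages = [random.betavariate(alpha[c], beta[c]) for c in range(K)]
    return max(range(K), key=tirages.__getitem__)

def mettre_a_jour(c, succes):
    # recompense binaire : un succes incremente alpha, un echec incremente beta
    if succes:
        alpha[c] += 1.0
    else:
        beta[c] += 1.0

random.seed(0)
c = choisir_niveau()
mettre_a_jour(c, succes=True)
```

Au fil des itérations, les distributions $Beta(\alpha_c, \beta_c)$ se concentrent autour des niveaux les plus adaptés à l'apprenant, ce qui réalise le compromis exploration/exploitation.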

En utilisant la définition formelle de la fonction $\Gamma$ (équation \ref{eqGamma1}) et en remplaçant certaines variables, une nouvelle expression de la fonction Beta est obtenue (équation \ref{f2beta}).

\begin{equation}
\Gamma(z)=\int_0^\infty e^{-x} x^{z-1} dx
\label{eqGamma1}
\end{equation}

\begin{equation}
Beta(\theta | \alpha, \beta) = \frac{\int_0^\infty e^{-s} s^{\alpha+\beta-1}ds}{\int_0^\infty e^{-u} u^{\alpha-1}du\int_0^\infty e^{-v} v^{\beta-1}dv}\theta^{\alpha-1}(1-\theta)^{\beta-1}
\label{f2beta}
\end{equation}

En exprimant les deux intégrales du dénominateur comme une seule intégrale double, l'expression \ref{f3Beta} est obtenue.

\begin{equation}
\int_{u=0}^{\infty}\int_{v=0}^\infty e^{-u-v} u^{\alpha-1} v^{\beta-1}du \, dv
\label{f3Beta}
\end{equation}

Les substitutions $u=st$, $v=s(1-t)$ (soit $s=u+v$ et $t=u/(u+v)$) sont ensuite appliquées, avec le jacobien calculé dans l'équation \ref{eqJac}, menant ainsi à l'expression finale définie par l'équation \ref{f4Beta}.

\begin{equation}
\left (
\begin{matrix}
\frac{\partial u}{\partial t} & \frac{\partial u}{\partial s}\\
\frac{\partial v}{\partial t} & \frac{\partial v}{\partial s}\\
\end{matrix}
\right ) =
\left (
\begin{matrix}
s & t \\
-s & 1-t\\
\end{matrix}
\right ), \qquad du \, dv = s \; dt \, ds
\label{eqJac}
\end{equation}

\begin{equation}
\int_{s=0}^\infty \int_{t=0}^1 e^{-s}(st)^{\alpha-1}(s(1-t))^{\beta-1}s \; ds \, dt
\label{f4Beta}
\end{equation}

Viennent ensuite les équations \ref{f5Beta} et \ref{f6Beta}, obtenues en séparant les intégrales selon les variables de substitution indépendantes $s$ et $t$.

\begin{equation}
\int_{s=0}^\infty e^{-s}s^{\alpha+\beta-1}ds \int_{t=0}^1 t^{\alpha-1}(1-t)^{\beta-1}dt
\label{f5Beta}
\end{equation}

\begin{equation}
Beta(\theta | \alpha, \beta) = \frac{\int_0^\infty e^{-s} s^{\alpha+\beta-1}ds}{\int_{s=0}^\infty e^{-s}s^{\alpha+\beta-1}ds \int_{t=0}^1 t^{\alpha-1}(1-t)^{\beta-1}dt}\theta^{\alpha-1}(1-\theta)^{\beta-1}
\label{f6Beta}
\end{equation}

Finalement, la famille de fonctions de distribution Beta peut être calculée selon l'équation \ref{f7Beta}.

\begin{equation}
Beta(\theta | \alpha, \beta) = \frac{\theta^{\alpha-1}(1-\theta)^{\beta-1}}{\int_{0}^1 t^{\alpha-1}(1-t)^{\beta-1}dt}
\label{f7Beta}
\end{equation}
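L'équation \ref{f7Beta} peut être vérifiée numériquement : sous les hypothèses de la dérivation, le dénominateur coïncide avec $\Gamma(\alpha)\Gamma(\beta)/\Gamma(\alpha+\beta)$ et la densité normalisée intègre à 1. Les valeurs de $\alpha$ et $\beta$ ci-dessous sont choisies arbitrairement.

```python
import math

def noyau_beta(t, a, b):
    # t^(a-1) * (1-t)^(b-1), le numerateur de l'equation (f7Beta)
    return t ** (a - 1) * (1 - t) ** (b - 1)

def integrale_01(f, n=20000):
    # quadrature du point milieu sur [0, 1]
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

a, b = 2.5, 4.0
denom = integrale_01(lambda t: noyau_beta(t, a, b))
# valeur exacte du denominateur : Gamma(a) Gamma(b) / Gamma(a + b)
exact = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
# masse totale de la densite normalisee, qui doit valoir 1
masse = integrale_01(lambda t: noyau_beta(t, a, b) / denom)
```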

L'évolution de l'algorithme de recommandation TS résulte du changement des distributions de probabilité. Il est à noter que, pour quantifier cette évolution, le changement et la variabilité doivent être calculés en fonction du temps. Les distributions de probabilité peuvent être comparées pour déterminer leur degré de similitude.

Par ailleurs, l'apprentissage automatique utilise la divergence de Kullback-Leibler, qui décrit l'entropie relative de deux distributions de probabilité. Cette fonction est fondée sur le concept d'entropie et son résultat peut être interprété comme la quantité d'information nécessaire pour obtenir la distribution de probabilité $q$ à partir de la distribution de probabilité $p$. Bien que largement utilisée, la divergence de Kullback-Leibler (équation \ref{dkl}) présente toutefois l'inconvénient de ne pas être une distance : elle n'est pas symétrique, ne satisfait pas l'inégalité triangulaire et n'est pas bornée \cite{Li_2024}. Pour remédier à cette difficulté, il est possible d'utiliser la divergence de Jensen-Shannon.

\begin{equation}
D_{KL}(p(x),q(x))=\int_{-\infty}^{\infty}p(x) \log \left(\frac{p(x)}{q(x)} \right)dx
\label{dkl}
\end{equation}

La divergence de Jensen-Shannon est fondée sur la divergence de Kullback-Leibler. Une distribution de probabilité auxiliaire $m$ est créée, dont la définition est fondée sur les distributions initiales $p$ et $q$ \cite{Kim2024}. L'équation \ref{djs} montre la définition formelle de la divergence de Jensen-Shannon, où $m(x)$ est une distribution de mélange fondée sur $p(x)$ et $q(x)$, calculée selon l'équation \ref{djs2}. Les distributions de probabilité à comparer doivent être continues et définies sur le même domaine.

\begin{equation}
D_{JS}(p(x),q(x))=\frac{1}{2}D_{KL}(p(x), m(x))+\frac{1}{2}D_{KL}(q(x), m(x))
\label{djs}
\end{equation}

\begin{equation}
m(x)=\frac{1}{2}p(x)+\frac{1}{2}q(x)
\label{djs2}
\end{equation}
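À titre de vérification, l'ébauche suivante approxime les équations \ref{dkl}, \ref{djs} et \ref{djs2} par quadrature pour deux densités Beta ; le logarithme en base 2 est une hypothèse d'illustration qui borne $D_{JS}$ dans l'intervalle $[0,1]$.

```python
import math

def beta_pdf(a, b):
    # densite Beta(a, b) normalisee via la fonction Gamma
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return lambda x: c * x ** (a - 1) * (1 - x) ** (b - 1)

def d_kl(p, q, n=20000):
    # equation (dkl), restreinte a ]0, 1[, quadrature du point milieu
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        px = p(x)
        if px > 0.0:
            total += px * math.log2(px / q(x)) * h
    return total

def d_js(p, q):
    # equations (djs) et (djs2) : melange m, puis moyenne des deux D_KL
    m = lambda x: 0.5 * p(x) + 0.5 * q(x)
    return 0.5 * d_kl(p, m) + 0.5 * d_kl(q, m)

d = d_js(beta_pdf(2, 5), beta_pdf(5, 2))
```

Contrairement à $D_{KL}$, la valeur obtenue est symétrique en $p$ et $q$ et reste bornée, ce qui en fait une mesure commode pour suivre l'évolution des distributions du module TS.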

La prédiction utilisée dans le module proposé ici a été présentée dans le chapitre \ref{ChapESCBR}. Il s'agit d'un algorithme d'empilement de raisonnement à partir de cas mettant en œuvre deux niveaux d'intégration. Le module utilise globalement la stratégie d'empilement pour exécuter plusieurs algorithmes afin de rechercher des informations dans un ensemble de données et générer des solutions à différents problèmes génériques. En outre, une étape d'évaluation permet de sélectionner la solution la plus adaptée à un problème donné en fonction d'une métrique adaptative définie pour les problèmes de régression.

\subsection{Algorithme Proposé}
\label{Sec:TS-ESCBR-SMA}

Nous proposons ici une intégration de l'algorithme d'adaptation stochastique (fondé sur l'échantillonnage de Thompson) avec ESCBR-SMA. Ainsi, le module de recommandation révise la séance en fonction des notes de l'apprenant et ESCBR-SMA effectue une prédiction pour valider l'adaptation générée.

L'idée est d'unifier les deux modules en se fondant à la fois sur des informations locales (recommandation fondée sur l'échantillonnage de Thompson (TS) et informations propres à l'apprenant) et sur des informations globales (cas similaires de la base de connaissances du système de RàPC, suivant le principe du paradoxe de Stein), car le RàPC combine différentes observations pour réaliser une estimation.

L'architecture de l'algorithme est présentée sur la figure \ref{fig:Amodel}, où l'on peut voir que les deux algorithmes TS et RàPC sont exécutés en parallèle et indépendamment. Des synchronisations sont faites après obtention des résultats de chaque module. Ces résultats sont unifiés via une fonction de pondération. La recommandation finale est calculée selon l'équation \ref{eqMixModels_}. Le \textit{paradoxe de Simpson} désigne une situation dans laquelle un phénomène observé dans plusieurs groupes s'inverse lorsque les groupes sont combinés \cite{10.1145/3578337.3605122}. L'unification d'ensembles de données différents peut atténuer ce paradoxe \cite{lei2024analysis}.

\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{Figures/Model.png}
\caption{Schéma de l'architecture de l'algorithme proposé}
\label{fig:Amodel}
\end{figure}

La première étape est l'adaptation avec l'échantillonnage de Thompson. Vient ensuite la prédiction via ESCBR-SMA. Enfin, le processus se termine par la prise de décision concernant la suite de la séance à délivrer à l'apprenant. Le système de recommandation obtient une valeur de probabilité pour tous les niveaux de complexité et ESCBR-SMA évalue la proposition avec une prédiction pour chaque niveau de complexité. Le tableau \ref{tabvp} présente les variables et les paramètres du module proposé ainsi que les mesures employées.

\begin{table}[!ht]
\centering
\footnotesize
\begin{tabular}{c|c|>{\centering\arraybackslash}p{8cm}|c}
ID&Type&Description&Domaine\\
\hline
$\alpha$&p&Paramètre de la distribution Beta&$[1, \infty[ \subset \mathbb{R}$\\
$\beta$&p&Paramètre de la distribution Beta&$[1, \infty[ \subset \mathbb{R}$\\
$t$&p&Numéro de l'itération&$\mathbb{N}$\\
$c$&p&Niveau de complexité&$\mathbb{N}$\\
$x_c$&p&Note moyenne par niveau de complexité $c$&$\mathbb{R}$\\
$y_c$&p&Nombre de questions par niveau de complexité $c$&$\mathbb{N}$\\
$r$&f&Fonction suivie pour la recommandation&$[0,1] \subset \mathbb{R}$\\
$k_{t,c}$&v&Évolution de la connaissance au temps $t$ pour le niveau de complexité $c$&$[0,1] \subset \mathbb{R}$\\
$vk_{t,c}$&v&Évolution de la connaissance pour chaque niveau de complexité $c$&$\mathbb{R}$\\
$TS_c$&v&Récompense d'échantillonnage de Thompson pour un niveau de complexité $c$&$[0,1] \subset \mathbb{R}$\\
$TSN_c$&v&Normalisation de $TS_c$ avec les autres niveaux de complexité&$[0,1] \subset \mathbb{R}$\\
$ESCBR_c$&v&Prédiction de la note pour un niveau de complexité $c$&$\mathbb{R}_+$\\
$p_c$&f&Fonction de densité de probabilité pour le niveau de complexité $c$&$\mathbb{R}_+$\\
$D_{JS}$&f&Divergence de Jensen-Shannon&$[0,1] \subset \mathbb{R}$\\

\end{tabular} 534 534 \end{tabular}
\caption{Paramètres (p), variables (v) et fonctions (f) de l'algorithme proposé et des métriques utilisées} 535 535 \caption{Paramètres (p), variables (v) et fonctions (f) de l'algorithme proposé et des métriques utilisées}
\label{tabvp} 536 536 \label{tabvp}
\end{table} 537 537 \end{table}

Pour rappel, le processus de recommandation se fait en trois étapes. Tout d'abord, des valeurs aléatoires sont tirées pour chaque niveau de complexité $c$ en utilisant les distributions de probabilité générées avec l'algorithme TS (équation \ref{IntEq1_}). Une fois que toutes les valeurs de probabilité correspondant à tous les niveaux de complexité ont été obtenues, la normalisation de toutes ces valeurs est calculée selon l'équation \ref{IntEq2_}. Les valeurs normalisées servent de paramètres de priorité pour les prédictions effectuées par ESCBR-SMA (équation \ref{eqMixModels_}). La recommandation finalement proposée est celle dont la valeur est la plus élevée.

\begin{equation}
TS_c=rand(Beta(\alpha_c, \beta_c))
\label{IntEq1_}
\end{equation}

\begin{equation}
TSN_c=\frac{TS_c}{\sum_{i=0}^4TS_i}
\label{IntEq2_}
\end{equation}

\begin{equation}
n_c=argmax_c(TSN_c*ESCBR_c)
\label{eqMixModels_}
\end{equation}
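
À titre d'illustration, les trois étapes ci-dessus peuvent être esquissées en Python. Il s'agit d'un croquis minimal et hypothétique, hors du système réel : les valeurs de la liste `escbr` sont des prédictions fictives tenant lieu de sorties d'ESCBR-SMA, et les paramètres Beta sont arbitraires.

```python
import random

def recommander(alpha, beta, escbr):
    """Recommande un niveau de complexité (0..4) : tirage de Thompson,
    normalisation, puis argmax pondéré par les prédictions ESCBR."""
    # Équation (7.x, IntEq1_) : un tirage Beta par niveau de complexité
    ts = [random.betavariate(alpha[c], beta[c]) for c in range(5)]
    # Équation (IntEq2_) : normalisation des tirages
    total = sum(ts)
    tsn = [v / total for v in ts]
    # Équation (eqMixModels_) : niveau au score pondéré maximal
    return max(range(5), key=lambda c: tsn[c] * escbr[c])

# Exemple fictif : paramètres Beta après quelques réponses, prédictions supposées
alpha = [3, 2, 1, 1, 1]
beta = [1, 2, 3, 4, 5]
escbr = [6.5, 7.0, 5.5, 4.0, 3.0]
niveau = recommander(alpha, beta, escbr)
print(niveau)  # un niveau entre 0 et 4
```

Le tirage étant stochastique, deux appels successifs peuvent recommander des niveaux différents ; c'est précisément ce qui permet l'exploration des niveaux de complexité.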

\subsection{Résultats et Discussion}

Le principal inconvénient posé par la validation d'un tel système « en situation réelle » est la difficulté à collecter des données et à évaluer des systèmes différents dans des conditions strictement similaires. Cette difficulté est accentuée dans les contextes d'apprentissage autorégulés, puisque les apprenants peuvent quitter la plateforme d'apprentissage à tout moment, rendant ainsi les données incomplètes \cite{badier:hal-04092828}.

Pour cette raison, les différentes approches proposées ont été testées sur des données générées : les notes et les temps de réponse de 1000 apprenants fictifs et cinq questions par niveau de complexité. Les notes des apprenants ont été créées en suivant la loi de distribution \textit{logit-normale} que nous avons jugée proche de la réalité de la progression d'un apprentissage \cite{Data}.
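
Un tel tirage logit-normal peut s'esquisser comme suit ; les paramètres `mu` et `sigma` sont ici purement hypothétiques et ne correspondent pas nécessairement à ceux employés pour générer la base.

```python
import math
import random

def note_logit_normale(mu=0.8, sigma=0.7, note_max=10):
    """Tire une note dans ]0, note_max[ selon une loi logit-normale :
    si X suit N(mu, sigma), alors sigmoide(X) suit une logit-normale sur ]0, 1[."""
    x = random.gauss(mu, sigma)
    return note_max / (1.0 + math.exp(-x))

# 1000 apprenants fictifs, 5 questions par niveau (paramètres supposés)
notes = [[note_logit_normale() for _ in range(5)] for _ in range(1000)]
```

Le support borné de la loi logit-normale convient aux notes (bornées par construction), tout en autorisant une asymétrie que n'offre pas une gaussienne tronquée.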

Quatre séries de tests ont été effectuées. La première série a été menée sur le système AI-VT intégrant le système de RàPC pour la régression afin de démontrer la capacité de l'algorithme à prédire les notes à différents niveaux de complexité.
La deuxième série de tests a évalué la progression des connaissances avec TS afin d'analyser la capacité du module à proposer des recommandations personnalisées. Lors de la troisième série de tests, nous avons comparé les algorithmes de recommandation BKT et TS. Enfin, lors de la quatrième série de tests, nous avons comparé TS seul et TS avec ESCBR-SMA.

\subsubsection{Régression avec ESCBR-SMA pour l'aide à l'apprentissage humain}

Le SMA que nous avons implémenté utilise un raisonnement bayésien, ce qui permet aux agents d'apprendre des données et d'interagir au cours de l'exécution et de l'exploration.

ESCBR-SMA utilise une fonction noyau pour obtenir la meilleure approximation de la solution du problème cible. Dans notre cas, l'obtention de la meilleure solution est un problème NP-difficile car la formulation est similaire au problème de Fermat-Weber à $N$ dimensions \cite{doi:10.1137/23M1592420}.

Les différents scénarios du tableau \ref{tab:scenarios} ont été considérés dans un premier temps. Dans le scénario $E_1$, il s'agit de prédire la note d'un apprenant au premier niveau de complexité, après 3 questions. Le scénario $E_2$ considère les notes de 8 questions et l'objectif est de prédire la note de la neuvième question dans le même niveau de complexité. Le scénario $E_3$ interpole la neuvième note que l'apprenant obtiendrait si la neuvième question était de niveau de complexité supérieur à celui de la huitième question. Cette interpolation est faite sur la base des notes obtenues aux quatre questions précédentes. Le scénario $E_4$ considère 4 questions et le système doit interpoler 2 notes dans un niveau de complexité supérieur.

\begin{table}[!ht]
\centering
\begin{tabular}{ccc}
Scénario&Caractéristiques du problème&Dimension de la solution\\
\hline
$E_1$ & 5 & 1\\
$E_2$ & 15& 1\\
$E_3$ & 9 & 1\\
$E_4$ & 9 & 2\\
\end{tabular}
\caption{Description des scénarios}
\label{tab:scenarios}
\end{table}

ESCBR-SMA a été comparé aux neuf outils classiquement utilisés pour résoudre la régression, consignés dans le tableau \ref{tabAlgs}, selon l'erreur quadratique moyenne (RMSE - \textit{Root Mean Squared Error}), l'erreur médiane absolue (MedAE - \textit{Median Absolute Error}) et l'erreur moyenne absolue (MAE - \textit{Mean Absolute Error}).
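
Pour fixer les idées, les trois métriques d'erreur ci-dessus se calculent ainsi (croquis autonome sur des prédictions fictives, sans lien avec les valeurs du tableau de résultats) :

```python
import math

def rmse(y, yp):
    """Racine de l'erreur quadratique moyenne."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yp)) / len(y))

def mae(y, yp):
    """Erreur moyenne absolue."""
    return sum(abs(a - b) for a, b in zip(y, yp)) / len(y)

def medae(y, yp):
    """Erreur médiane absolue."""
    e = sorted(abs(a - b) for a, b in zip(y, yp))
    n = len(e)
    return e[n // 2] if n % 2 else (e[n // 2 - 1] + e[n // 2]) / 2

# Notes réelles et prédites (valeurs fictives)
y = [6.0, 7.5, 5.0, 8.0]
yp = [5.5, 7.0, 5.5, 9.0]
print(rmse(y, yp), medae(y, yp), mae(y, yp))
```

La RMSE pénalise davantage les grandes erreurs, tandis que la MedAE est robuste aux valeurs aberrantes ; c'est pourquoi le classement des algorithmes peut différer d'une métrique à l'autre.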

\begin{table}[!ht]
\centering
\footnotesize
\begin{tabular}{ll|ll}
ID&Algorithm&ID&Algorithm\\
\hline
A1&Linear Regression&A6&Polynomial Regression\\
A2&K-Nearest Neighbor&A7&Ridge Regression\\
A3&Decision Tree&A8&Lasso Regression\\
A4&Random Forest (Ensemble)&A9&Gradient Boosting (Ensemble)\\
A5&Multi Layer Perceptron&A10&Proposed Ensemble Stacking RàPC\\
\end{tabular}
\caption{Liste des algorithmes évalués}
\label{tabAlgs}
\end{table}

Le tableau \ref{tab:results} présente les résultats obtenus par les 10 algorithmes sur les quatre scénarios. Ces résultats montrent qu'ESCBR-SMA (A10) et le \textit{Gradient Boosting} (A9) obtiennent toujours les deux meilleurs résultats. Si l'on considère uniquement la RMSE, ESCBR-SMA occupe toujours la première place sauf pour $E_3$ où il est deuxième. Inversement, en considérant l'erreur médiane absolue ou l'erreur moyenne absolue, A10 se classe juste après A9. ESCBR-SMA et le \textit{Gradient Boosting} sont donc efficaces pour interpoler les notes des apprenants.

\begin{table}[!ht]
\centering
\footnotesize
\begin{tabular}{c|cccccccccc}
&\multicolumn{10}{c}{\textbf{Algorithme}}\\
\hline
& A1&A2&A3&A4&A5&A6&A7&A8&A9&A10\\
\textbf{Scénario (Métrique)}\\
\hline
$E_1$ (RMSE)&0.625&0.565&0.741&0.56&0.606&0.626&0.626&0.681&0.541&\textbf{0.54}\\
$E_1$ (MedAE) & 0.387&0.35&0.46&0.338&0.384&0.387&0.387&0.453&\textbf{0.327}&0.347\\
$E_1$ (MAE) &0.485&0.436&0.572&0.429&0.47&0.485&0.485&0.544&\textbf{0.414}&0.417\\
\hline
$E_2$ (RMSE)& 0.562&0.588&0.78&0.571&0.61&0.562&0.562&0.622&0.557&\textbf{0.556}\\
$E_2$ (MedAE)&0.351&0.357&0.464&0.344&0.398&0.351&0.351&0.415&\textbf{0.334}&0.346\\
$E_2$ (MAE)&0.433&0.448&0.591&0.437&0.478&0.433&0.433&0.495&\textbf{0.422}&0.429\\
\hline
$E_3$ (RMSE)&0.591&0.59&0.79&0.57&0.632&0.591&0.591&0.644&\textbf{0.555}&0.558\\
$E_3$ (MedAE)&0.367&0.362&0.474&0.358&0.404&0.367&0.367&0.433&\textbf{0.336}&0.349\\
$E_3$ (MAE)&0.453&0.45&0.598&0.441&0.49&0.453&0.453&0.512&\textbf{0.427}&0.43\\
\hline
$E_4$ (RMSE)&0.591&0.589&0.785&0.568&0.613&0.591&0.591&0.644&0.554&\textbf{0.549}\\
$E_4$ (MedAE)&0.367&0.362&0.465&0.57&0.375&0.367&0.367&0.433&\textbf{0.336}&0.343\\
$E_4$ (MAE)&0.453&0.45&0.598&0.438&0.466&0.453&0.453&0.512&0.426&\textbf{0.417}\\
\end{tabular}
\caption{Erreurs moyennes et médianes des interpolations des 10 algorithmes sélectionnés sur les 4 scénarios considérés et obtenues après 100 exécutions.}
\label{tab:results}
\end{table}

\subsubsection{Progression des connaissances}

L'algorithme de recommandation TS est fondé sur le paradigme bayésien, le rendant ainsi particulièrement adapté aux problèmes liés à la limitation de la quantité de données et à une incertitude forte. Afin de quantifier la connaissance et de suivre sa progression dans le temps avec TS, la divergence de Jensen-Shannon entre les distributions Beta aux instants $t$ et $t-1$ a été mesurée. L'équation \ref{eqprog1} décrit formellement le calcul à effectuer avec les distributions de probabilité à un instant $t$ pour un niveau de complexité $c$, en utilisant la définition de $m$ (équation \ref{eqprog2}).

\begin{multline}
k_{t,c}=\frac{1}{2}
\int_{0}^{1}p_c(\alpha_t,\beta_t,x) \log \left(\frac{p_c(\alpha_t,\beta_t,x)}{m(p_c(\alpha_{t-1},\beta_{t-1},x),p_c(\alpha_t,\beta_t,x))} \right)dx
\\
+\frac{1}{2}
\int_{0}^{1}p_c(\alpha_{t-1},\beta_{t-1},x) \log \left(\frac{p_c(\alpha_{t-1},\beta_{t-1},x)}{m(p_c(\alpha_{t-1},\beta_{t-1},x),p_c(\alpha_t,\beta_t,x))} \right)dx
\label{eqprog1}
\end{multline}

\begin{multline}
m(p(\alpha_{(t-1)},\beta_{(t-1)},x),p(\alpha_{t},\beta_{t},x))=\frac{1}{2} \left( \frac{x^{\alpha_{(t-1)}-1}(1-x)^{\beta_{(t-1)}-1}}{\int_0^1 u^{\alpha_{(t-1)}-1}(1-u)^{\beta_{(t-1)}-1}du} \right )\\
+\frac{1}{2} \left (\frac{x^{\alpha_{t}-1}(1-x)^{\beta_{t}-1}}{\int_0^1 u^{\alpha_{t}-1}(1-u)^{\beta_{t}-1}du} \right )
\label{eqprog2}
\end{multline}

La progression du nombre total de connaissances en $t$ est la somme des différences entre $t$ et $t-1$ pour tous les $c$ niveaux de complexité, calculées avec la divergence de Jensen-Shannon (équation \ref{eqTEK}). Pour ce faire, nous évaluons la progression de la variabilité donnée par l'équation \ref{eqVarP}.

\begin{equation}
vk_{t,c}=\begin{cases}
D_{JS}(Beta(\alpha_{t,c},\beta_{t,c}), Beta(\alpha_{t+1,c},\beta_{t+1,c})), & \frac{\alpha_{t,c}}{\alpha_{t,c}+\beta_{t,c}} < \frac{\alpha_{t+1,c}}{\alpha_{t+1,c}+\beta_{t+1,c}}\\
-D_{JS}(Beta(\alpha_{t,c},\beta_{t,c}), Beta(\alpha_{t+1,c},\beta_{t+1,c})),& \text{sinon}
\end{cases}
\label{eqVarP}
\end{equation}

\begin{equation}
k_t=\sum_{c=4}^{c=0 \lor k_t \neq 0}
\begin{cases}
\alpha_{c-1} vk_{t,c-1};&vk_{t,c} > 0\\
0;&\text{sinon}
\end{cases}
\label{eqTEK}
\end{equation}
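
Le calcul de $vk_{t,c}$ peut s'esquisser numériquement : un croquis minimal, où la divergence de Jensen-Shannon entre deux lois Beta est approchée par intégration numérique sur $]0,1[$ (paramètres Beta d'exemple purement illustratifs).

```python
import math

def beta_pdf(x, a, b):
    """Densité de Beta(a, b) en x, via la fonction gamma de la bibliothèque standard."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x ** (a - 1) * (1 - x) ** (b - 1)

def d_js_beta(a1, b1, a2, b2, n=2000):
    """Divergence de Jensen-Shannon entre Beta(a1,b1) et Beta(a2,b2),
    approchée par une somme de Riemann sur ]0, 1[ (log naturel)."""
    d = 0.0
    for i in range(1, n):
        x = i / n
        p = beta_pdf(x, a1, b1)
        q = beta_pdf(x, a2, b2)
        m = 0.5 * (p + q)          # mélange, rôle de m dans l'éq. (eqprog2)
        if p > 0:
            d += 0.5 * p * math.log(p / m) / n
        if q > 0:
            d += 0.5 * q * math.log(q / m) / n
    return d

# Équation (eqVarP) : signe selon l'évolution de la moyenne alpha/(alpha+beta)
a_t, b_t, a_t1, b_t1 = 2, 2, 3, 2
d = d_js_beta(a_t, b_t, a_t1, b_t1)
vk = d if a_t / (a_t + b_t) < a_t1 / (a_t1 + b_t1) else -d
print(vk)  # positif : la moyenne est passée de 0,5 à 0,6
```

La divergence de Jensen-Shannon étant symétrique et bornée (par $\log 2$ en log naturel), le signe ajouté dans l'équation \ref{eqVarP} encode le sens de la progression.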

\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{Figures/kEvol_TS.jpg}
\caption{Progression des connaissances avec l'échantillonnage de Thompson selon la divergence de Jensen-Shannon}
\label{fig:evolution}
\end{figure}

La figure \ref{fig:evolution} montre la progression cumulée des connaissances sur les quinze questions d'une même séance d'entraînement. L'augmentation de la moyenne du niveau de connaissance entre la première et la dernière question de la même séance montre que tous les apprenants ont statistiquement augmenté leur niveau de connaissance. La variabilité augmente de la première question jusqu'à la neuvième, où le système a acquis plus d'informations sur les apprenants. À ce stade, la variabilité diminue et la moyenne augmente.

\subsubsection{Système de recommandation avec un jeu de données d'étudiants réels}

Le système de recommandation TS a été testé avec un ensemble de données adaptées, extraites de données réelles d'interactions d'étudiants avec un environnement d'apprentissage virtuel pour différents cours \cite{Kuzilek2017}. Cet ensemble contient les notes de $23366$ apprenants dans différents cours. Les apprenants ont été évalués selon différentes modalités (partiels, projets, QCM) \cite{Data}. Cet ensemble de données a pu être intégré au jeu de données d'AI-VT (notes, temps de réponse et 5 niveaux de complexité). Le test a consisté à générer une recommandation pour l'avant-dernière question en fonction des notes précédentes. Ce test a été exécuté 100 fois pour chaque apprenant. Les nombres de questions recommandées sont reportés sur la figure \ref{fig:stabilityBP} pour chaque niveau de complexité.
Celle-ci montre que, malgré la stochasticité, la variance globale dans tous les niveaux de complexité est faible au regard du nombre total d'apprenants et du nombre total de recommandations, ce qui démontre la stabilité de l'algorithme.\\
689 689
\begin{figure}[!ht] 690 690 \begin{figure}[!ht]
\centering 691 691 \centering
\includegraphics[width=1\linewidth]{Figures/stabilityBoxplot.png} 692 692 \includegraphics[width=1\linewidth]{Figures/stabilityBoxplot.png}
\caption{Nombre de recommandations par niveau de complexité} 693 693 \caption{Nombre de recommandations par niveau de complexité}
\label{fig:stabilityBP} 694 694 \label{fig:stabilityBP}
\end{figure} 695 695 \end{figure}

La précision de la recommandation pour tous les apprenants est évaluée en considérant comme comportement correct deux états : i) l'algorithme recommande un niveau où l'apprenant a une note supérieure ou égale à 6 et ii) l'algorithme recommande un niveau inférieur au niveau réel évalué par l'apprenant. Le premier cas montre que l'algorithme a identifié le moment précis où l'apprenant doit augmenter le niveau de complexité, le second permet d'établir que l'algorithme propose de renforcer un niveau de complexité plus faible. La précision est alors calculée comme le rapport entre le nombre d'états correspondant aux comportements corrects définis et le nombre total de recommandations. La figure \ref{fig:precision} montre les résultats de cette métrique après 100 exécutions.\\
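
Ce critère de précision se traduit directement en code ; croquis minimal et hypothétique, où les noms de variables et les valeurs d'exemple sont de pures illustrations.

```python
def precision_recommandations(recommandations, notes, niveaux_reels, seuil=6):
    """Précision : part des recommandations correctes, c'est-à-dire
    (i) niveau recommandé où la note atteint le seuil, ou
    (ii) niveau recommandé inférieur au niveau réel de l'apprenant."""
    corrects = 0
    for niveau_reco, niveau_reel in zip(recommandations, niveaux_reels):
        if notes[niveau_reco] >= seuil or niveau_reco < niveau_reel:
            corrects += 1
    return corrects / len(recommandations)

# Exemple fictif : notes moyennes par niveau, trois recommandations
notes = {0: 8.0, 1: 6.5, 2: 5.0, 3: 4.0, 4: 2.0}
print(precision_recommandations([1, 2, 3], notes, [1, 3, 3]))  # 2/3
```

Dans cet exemple, les deux premières recommandations sont correctes (note au seuil, puis renforcement d'un niveau inférieur) et la troisième ne l'est pas.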

\begin{figure}[!ht]
\centering
\includegraphics[width=1\linewidth]{Figures/precision.png}
\caption{Précision de la recommandation}
\label{fig:precision}
\end{figure}

\subsubsection{Comparaison entre TS et BKT}

La figure \ref{fig:EvGrades} permet de comparer la recommandation fondée sur l'échantillonnage de Thompson et celle fondée sur BKT. Cette figure montre l'évolution des notes des apprenants en fonction du nombre de questions auxquelles ils répondent dans la même séance. Dans ce cas, TS génère moins de variabilité que BKT, mais les évolutions induites par les deux systèmes restent globalement très similaires.

\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{Figures/GradesEv.jpg}
\caption{Comparaison de l'évolution des notes entre les systèmes fondés sur TS et BKT.}
\label{fig:EvGrades}
\end{figure}

Toutefois, si l'on considère l'évolution du niveau de complexité recommandé (figure \ref{fig:EvCL}), TS fait évoluer le niveau de complexité des apprenants, alors que BKT a tendance à les laisser au même niveau de complexité. Autrement dit, avec BKT, il est difficile d'aborder de nouveaux sujets ou des concepts plus complexes au sein du même domaine. En comparant les résultats des deux figures (figures \ref{fig:EvGrades} et \ref{fig:EvCL}), TS permet de faire progresser la moyenne des notes et facilite l'évolution des niveaux de complexité.

\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{Figures/LevelsEv.jpg}
\caption{Comparaison de l'évolution des niveaux entre les systèmes de recommandation fondés sur BKT et TS}
\label{fig:EvCL}
\relax
\providecommand\babel@aux[2]{}
\@nameuse{bbl@beforestart}
\catcode `:\active
\catcode `;\active
\catcode `!\active
\catcode `?\active
\providecommand\hyper@newdestlabel[2]{}
\providecommand\HyperFirstAtBeginDocument{\AtBeginDocument}
\HyperFirstAtBeginDocument{\ifx\hyper@anchor\@undefined
\global\let\oldnewlabel\newlabel
\gdef\newlabel#1#2{\newlabelxx{#1}#2}
\gdef\newlabelxx#1#2#3#4#5#6{\oldnewlabel{#1}{{#2}{#3}}}
\AtEndDocument{\ifx\hyper@anchor\@undefined
\let\newlabel\oldnewlabel
\fi}
\fi}
\global\let\hyper@last\relax
\gdef\HyperFirstAtBeginDocument#1{#1}
\providecommand\HyField@AuxAddToFields[1]{}
\providecommand\HyField@AuxAddToCoFields[2]{}
\providecommand \oddpage@label [2]{}
\babel@aux{french}{}
\@writefile{toc}{\contentsline {part}{I\hspace {1em}Contexte et Problématiques}{1}{part.1}\protected@file@percent }
\citation{Nkambou}
\citation{doi:10.1177/1754337116651013}
\@writefile{toc}{\contentsline {chapter}{\numberline {1}Introduction}{3}{chapter.1}\protected@file@percent }
\@writefile{lof}{\addvspace {10\p@ }}
\@writefile{lot}{\addvspace {10\p@ }}
\@writefile{toc}{\contentsline {section}{\numberline {1.1}Contributions Principales}{4}{section.1.1}\protected@file@percent }
\@writefile{toc}{\contentsline {section}{\numberline {1.2}Plan de la thèse}{5}{section.1.2}\protected@file@percent }
\@input{./chapters/contexte2.aux}
\@writefile{toc}{\contentsline {part}{II\hspace {1em}État de l'art}{21}{part.2}\protected@file@percent }
\@input{./chapters/EIAH.aux}
\@input{./chapters/CBR.aux}
\@writefile{toc}{\contentsline {part}{III\hspace {1em}Contributions}{39}{part.3}\protected@file@percent }
\@input{./chapters/Architecture.aux}
\@input{./chapters/ESCBR.aux}
\@input{./chapters/TS.aux}
\@input{./chapters/Conclusions.aux}
\@input{./chapters/Publications.aux}
\bibstyle{apalike}
\bibdata{main.bib}
\bibcite{Data}{Dat, 2023}
\bibcite{UCI}{UCI, 2024}
\bibcite{doi:10.3233/AIC-1994-7104}{Aamodt and Plaza, 1994}
\bibcite{NEURIPS2023_9d8cf124}{Abel et~al., 2023}
\bibcite{ALABDULRAHMAN2021114061}{Alabdulrahman and Viktor, 2021}
\bibcite{Arthurs}{Arthurs et~al., 2019}
\bibcite{Auer}{Auer et~al., 2021}
\bibcite{badier:hal-04092828}{Badier et~al., 2023}
\bibcite{BAKUROV2021100913}{Bakurov et~al., 2021}
\bibcite{10.1007/978-3-642-15973-2_50}{Butdee and Tichkiewitch, 2011}
\bibcite{CHIU2023100118}{Chiu et~al., 2023}
\bibcite{cmc.2023.033417}{Choi et~al., 2023}
\bibcite{Riesbeck1989}{Riesbeck and Schank, 1989}
\bibcite{10.1145/3459665}{Cunningham and Delany, 2021}
\bibcite{DIDDEN2023338}{Didden et~al., 2023}
\bibcite{EZALDEEN2022100700}{Ezaldeen et~al., 2022}
\bibcite{10.1007/978-3-030-58342-2_5}{Feely et~al., 2020}
\bibcite{10.1007/978-3-319-47096-2_11}{Grace et~al., 2016}
\bibcite{9434422}{Gupta et~al., 2021}
\bibcite{hajduk2019cognitive}{Hajduk et~al., 2019}
\bibcite{doi:10.1177/1754337116651013}{Henriet et~al., 2017}
\bibcite{10.1007/978-3-030-01081-2_9}{Henriet and Greffier, 2018}
\bibcite{HIPOLITO2023103510}{Hipólito and Kirchhoff, 2023}
\bibcite{Hoang}{Hoang, 2018}
\bibcite{HU2025127130}{Hu et~al., 2025}
\bibcite{HUANG2023104684}{Huang et~al., 2023}
\bibcite{INGKAVARA2022100086}{Ingkavara et~al., 2022}
\bibcite{Daubias2011}{Jean-Daubias, 2011}
\bibcite{JUNG20095695}{Jung et~al., 2009}
\bibcite{KAMALI2023110242}{Kamali et~al., 2023}
\bibcite{Kim2024}{Kim, 2024}
\bibcite{KOLODNER1983281}{Kolodner, 1983}
\bibcite{Kuzilek2017}{Kuzilek et~al., 2017}
\bibcite{LALITHA2020583}{Lalitha and Sreeja, 2020}
\bibcite{lei2024analysis}{Lei, 2024}
\bibcite{min8100434}{Leikola et~al., 2018}
\bibcite{10.1007/978-3-030-58342-2_20}{Lepage et~al., 2020}
\bibcite{Li_2024}{Li et~al., 2024}
\bibcite{10.3389/fgene.2021.600040}{Liang et~al., 2021}
\bibcite{9870279}{Lin, 2022}
\bibcite{Liu2023}{Liu and Yu, 2023}
\bibcite{jmse11050890}{Louvros et~al., 2023}
\bibcite{10.1007/978-3-319-61030-6_1}{Maher and Grace, 2017}
\bibcite{10.1007/978-3-031-63646-2_4}{Malburg et~al., 2024}
\bibcite{Liang}{Mang et~al., 2021}
\bibcite{doi:10.1137/23M1592420}{Minsker and Strawn, 2024}
\bibcite{MUANGPRATHUB2020e05227}{Muangprathub et~al., 2020}
\bibcite{Muller}{Müller and Bergmann, 2015}
\bibcite{NGUYEN2024111566}{Nguyen, 2024}
\bibcite{Nkambou}{Nkambou et~al., 2010}
\bibcite{Obeid}{Obeid et~al., 2022}
\bibcite{10.1007/978-3-319-24586-7_20}{Onta{\~{n}}{\'o}n et~al., 2015}
\bibcite{pmlr-v238-ou24a}{Ou et~al., 2024}
\bibcite{PAREJASLLANOVARCED2024111469}{Parejas-Llanovarced et~al., 2024}
\bibcite{PETROVIC201617}{Petrovic et~al., 2016}
\bibcite{Richter2013}{Richter and Weber, 2013}
\bibcite{RICHTER20093}{Richter, 2009}
\bibcite{Robertson2014ARO}{Robertson and Watson, 2014}
\bibcite{ROLDANREYES20151}{{Roldan Reyes} et~al., 2015}
\bibcite{Sadeghi}{Sadeghi~Moghadam et~al., 2024}
\bibcite{schank+abelson77}{Schank and Abelson, 1977}
\bibcite{pmlr-v108-seznec20a}{Seznec et~al., 2020}
\bibcite{9072123}{Sinaga and Yang, 2020}
\bibcite{skittou2024recommender}{Skittou et~al., 2024}
\bibcite{10.1007/978-3-030-01081-2_25}{Smyth and Cunningham, 2018}
\bibcite{10.1007/978-3-030-58342-2_8}{Smyth and Willemsen, 2020}
\bibcite{Soto2}{Soto-Forero et~al., 2024a}
\bibcite{10.1007/978-3-031-63646-2_13}{Soto-Forero et~al., 2024b}
\bibcite{10.1007/978-3-031-63646-2_11}{Soto-Forero et~al., 2024c}
\bibcite{SU2022109547}{Su et~al., 2022}
\bibcite{8495930}{Supic, 2018}
\bibcite{math12111758}{Uguina et~al., 2024}
\bibcite{buildings13030651}{Uysal and Sonmez, 2023}
\bibcite{WANG2021331}{Wang et~al., 2021}
\bibcite{wolf2024keep}{Wolf et~al., 2024}
\bibcite{9627973}{Xu et~al., 2021}
\bibcite{10.1145/3578337.3605122}{Xu et~al., 2023}
\bibcite{YU2023110163}{Yu and Li, 2023}
\bibcite{YU2024123745}{Yu et~al., 2024}
\bibcite{ZHANG2021100025}{Zhang. and Aslan, 2021}
\bibcite{ZHANG2018189}{Zhang and Yao, 2018}
\bibcite{ZHANG2023110564}{Zhang et~al., 2023}
\bibcite{ZHAO2023118535}{Zhao et~al., 2023}
\bibcite{Zhou2021}{Zhou and Wang, 2021}
\bibcite{jmse10040464}{Zuluaga et~al., 2022}
\gdef \@abspage@last{122}
\begin{thebibliography}{}

\bibitem[Dat, 2023]{Data}
(2023).
\newblock Jeu de données.
\newblock
\url{https://disc.univ-fcomte.fr/gitlab/daniel.soto_forero/ai-vt-recommender-system}.
\newblock Accessed: 2023-11-20.

\bibitem[UCI, 2024]{UCI}
(2024).
\newblock Markelle Kelly, Rachel Longjohn, Kolby Nottingham, The UCI Machine
Learning Repository.
\newblock \url{https://archive.ics.uci.edu}.
\newblock Accessed: 2024-09-30.

\bibitem[Aamodt and Plaza, 1994]{doi:10.3233/AIC-1994-7104}
Aamodt, A. and Plaza, E. (1994).
\newblock Case-based reasoning: Foundational issues, methodological variations,
and system approaches.
\newblock {\em AI Communications}, 7(1):39--59.

\bibitem[Abel et~al., 2023]{NEURIPS2023_9d8cf124}
Abel, D., Barreto, A., Van~Roy, B., Precup, D., van Hasselt, H.~P., and Singh,
S. (2023).
\newblock A definition of continual reinforcement learning.
\newblock In Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., and
Levine, S., editors, {\em Advances in Neural Information Processing Systems},
volume~36, pages 50377--50407. Curran Associates, Inc.

\bibitem[Alabdulrahman and Viktor, 2021]{ALABDULRAHMAN2021114061}
Alabdulrahman, R. and Viktor, H. (2021).
\newblock Catering for unique tastes: Targeting grey-sheep users recommender
systems through one-class machine learning.
\newblock {\em Expert Systems with Applications}, 166:114061.

\bibitem[Arthurs et~al., 2019]{Arthurs}
Arthurs, N., Stenhaug, B., Karayev, S., and Piech, C. (2019).
\newblock Grades are not normal: Improving exam score models using the
logit-normal distribution.
\newblock In {\em International Conference on Educational Data Mining (EDM)},
page~6.

\bibitem[Auer et~al., 2021]{Auer}
Auer, F., Lenarduzzi, V., Felderer, M., and Taibi, D. (2021).
\newblock From monolithic systems to microservices: An assessment framework.
\newblock {\em Information and Software Technology}, 137:106600.

\bibitem[Badier et~al., 2023]{badier:hal-04092828}
Badier, A., Lefort, M., and Lefevre, M. (2023).
\newblock {Comprendre les usages et effets d'un syst{\`e}me de recommandations
p{\'e}dagogiques en contexte d'apprentissage non-formel}.
\newblock In {\em {EIAH'23}}, Brest, France.

\bibitem[Bakurov et~al., 2021]{BAKUROV2021100913}
Bakurov, I., Castelli, M., Gau, O., Fontanella, F., and Vanneschi, L. (2021).
\newblock Genetic programming for stacked generalization.
\newblock {\em Swarm and Evolutionary Computation}, 65:100913.

\bibitem[Butdee and Tichkiewitch, 2011]{10.1007/978-3-642-15973-2_50}
Butdee, S. and Tichkiewitch, S. (2011).
\newblock Case-based reasoning for adaptive aluminum extrusion die design
together with parameters by neural networks.
\newblock In Bernard, A., editor, {\em Global Product Development}, pages
491--496, Berlin, Heidelberg. Springer Berlin Heidelberg.

\bibitem[Chiu et~al., 2023]{CHIU2023100118}
Chiu, T.~K., Xia, Q., Zhou, X., Chai, C.~S., and Cheng, M. (2023).
\newblock Systematic literature review on opportunities, challenges, and future
research recommendations of artificial intelligence in education.
\newblock {\em Computers and Education: Artificial Intelligence}, 4:100118.

\bibitem[Choi et~al., 2023]{cmc.2023.033417}
Choi, J., Suh, D., and Otto, M.-O. (2023).
\newblock Boosted stacking ensemble machine learning method for wafer map
pattern classification.
\newblock {\em Computers, Materials \& Continua}, 74(2):2945--2966.

\bibitem[Riesbeck and Schank, 1989]{Riesbeck1989}
Riesbeck, C.~K. and Schank, R.~C. (1989).
\newblock {\em Inside Case-Based Reasoning}.
\newblock Psychology Press.

\bibitem[Cunningham and Delany, 2021]{10.1145/3459665}
Cunningham, P. and Delany, S.~J. (2021).
\newblock K-nearest neighbour classifiers - a tutorial.
\newblock {\em ACM Comput. Surv.}, 54(6).

\bibitem[Didden et~al., 2023]{DIDDEN2023338}
Didden, J.~B., Dang, Q.-V., and Adan, I.~J. (2023).
\newblock Decentralized learning multi-agent system for online machine shop
scheduling problem.
\newblock {\em Journal of Manufacturing Systems}, 67:338--360.

\bibitem[Ezaldeen et~al., 2022]{EZALDEEN2022100700}
Ezaldeen, H., Misra, R., Bisoy, S.~K., Alatrash, R., and Priyadarshini, R.
(2022).
\newblock A hybrid e-learning recommendation integrating adaptive profiling and
sentiment analysis.
\newblock {\em Journal of Web Semantics}, 72:100700.

\bibitem[Feely et~al., 2020]{10.1007/978-3-030-58342-2_5}
Feely, C., Caulfield, B., Lawlor, A., and Smyth, B. (2020).
\newblock Using case-based reasoning to predict marathon performance and
recommend tailored training plans.
\newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning
Research and Development}, pages 67--81, Cham. Springer International
Publishing.

\bibitem[Grace et~al., 2016]{10.1007/978-3-319-47096-2_11}
Grace, K., Maher, M.~L., Wilson, D.~C., and Najjar, N.~A. (2016).
\newblock Combining CBR and deep learning to generate surprising recipe
designs.
\newblock In Goel, A., D{\'i}az-Agudo, M.~B., and Roth-Berghofer, T., editors,
{\em Case-Based Reasoning Research and Development}, pages 154--169, Cham.
Springer International Publishing.

\bibitem[Gupta et~al., 2021]{9434422}
Gupta, S., Chaudhari, S., Joshi, G., and Yağan, O. (2021).
\newblock Multi-armed bandits with correlated arms.
\newblock {\em IEEE Transactions on Information Theory}, 67(10):6711--6732.

\bibitem[Hajduk et~al., 2019]{hajduk2019cognitive}
Hajduk, M., Sukop, M., and Haun, M. (2019).
\newblock {\em Cognitive Multi-agent Systems: Structures, Strategies and
Applications to Mobile Robotics and Robosoccer}.
\newblock Studies in Systems, Decision and Control. Springer International
Publishing.

\bibitem[Henriet et~al., 2017]{doi:10.1177/1754337116651013}
Henriet, J., Christophe, L., and Laurent, P. (2017).
\newblock Artificial intelligence-virtual trainer: An educative system based on
artificial intelligence and designed to produce varied and consistent
training lessons.
\newblock {\em Proceedings of the Institution of Mechanical Engineers, Part P:
Journal of Sports Engineering and Technology}, 231(2):110--124.

\bibitem[Henriet and Greffier, 2018]{10.1007/978-3-030-01081-2_9}
Henriet, J. and Greffier, F. (2018).
\newblock AI-VT: An example of CBR that generates a variety of solutions to the
same problem.
\newblock In Cox, M.~T., Funk, P., and Begum, S., editors, {\em Case-Based
Reasoning Research and Development}, pages 124--139, Cham. Springer
International Publishing.

\bibitem[Hipólito and Kirchhoff, 2023]{HIPOLITO2023103510}
Hipólito, I. and Kirchhoff, M. (2023).
\newblock Breaking boundaries: The Bayesian brain hypothesis for perception and
prediction.
\newblock {\em Consciousness and Cognition}, 111:103510.

\bibitem[Hoang, 2018]{Hoang}
Hoang, L. (2018).
\newblock {\em La formule du savoir. Une philosophie unifiée du savoir fondée
sur le théorème de Bayes}.
\newblock EDP Sciences.

\bibitem[Hu et~al., 2025]{HU2025127130}
Hu, B., Ma, Y., Liu, Z., and Wang, H. (2025).
\newblock A social importance and category enhanced cold-start user
recommendation system.
\newblock {\em Expert Systems with Applications}, 277:127130.

\bibitem[Huang et~al., 2023]{HUANG2023104684}
Huang, A.~Y., Lu, O.~H., and Yang, S.~J. (2023).
\newblock Effects of artificial intelligence–enabled personalized
recommendations on learners’ learning engagement, motivation, and outcomes
in a flipped classroom.
\newblock {\em Computers and Education}, 194:104684.

\bibitem[Ingkavara et~al., 2022]{INGKAVARA2022100086}
Ingkavara, T., Panjaburee, P., Srisawasdi, N., and Sajjapanroj, S. (2022).
\newblock The use of a personalized learning approach to implementing
self-regulated online learning.
\newblock {\em Computers and Education: Artificial Intelligence}, 3:100086.

\bibitem[Jean-Daubias, 2011]{Daubias2011}
Jean-Daubias, S. (2011).
\newblock Ingénierie des profils d'apprenants.

\bibitem[Jung et~al., 2009]{JUNG20095695}
Jung, S., Lim, T., and Kim, D. (2009).
\newblock Integrating radial basis function networks with case-based reasoning
for product design.
\newblock {\em Expert Systems with Applications}, 36(3, Part 1):5695--5701.

\bibitem[Kamali et~al., 2023]{KAMALI2023110242}
Kamali, S.~R., Banirostam, T., Motameni, H., and Teshnehlab, M. (2023).
\newblock An immune inspired multi-agent system for dynamic multi-objective
optimization.
\newblock {\em Knowledge-Based Systems}, 262:110242.

\bibitem[Kim, 2024]{Kim2024}
Kim, W. (2024).
\newblock A random focusing method with Jensen--Shannon divergence for
improving deep neural network performance ensuring architecture consistency.
\newblock {\em Neural Processing Letters}, 56(4):199.

\bibitem[Kolodner, 1983]{KOLODNER1983281}
Kolodner, J.~L. (1983).
\newblock Reconstructive memory: A computer model.
\newblock {\em Cognitive Science}, 7(4):281--328.

\bibitem[Kuzilek et~al., 2017]{Kuzilek2017}
Kuzilek, J., Hlosta, M., and Zdrahal, Z. (2017).
\newblock Open university learning analytics dataset.
\newblock {\em Scientific Data}, 4(1):170171.

\bibitem[Lalitha and Sreeja, 2020]{LALITHA2020583}
Lalitha, T.~B. and Sreeja, P.~S. (2020).
\newblock Personalised self-directed learning recommendation system.
\newblock {\em Procedia Computer Science}, 171:583--592.
\newblock Third International Conference on Computing and Network
Communications (CoCoNet'19).

\bibitem[Lei, 2024]{lei2024analysis}
Lei, Z. (2024).
\newblock Analysis of Simpson’s paradox and its applications.
\newblock {\em Highlights in Science, Engineering and Technology}, 88:357--362.

\bibitem[Leikola et~al., 2018]{min8100434}
Leikola, M., Sauer, C., Rintala, L., Aromaa, J., and Lundström, M. (2018).
\newblock Assessing the similarity of cyanide-free gold leaching processes: A
case-based reasoning application.
\newblock {\em Minerals}, 8(10).

\bibitem[Lepage et~al., 2020]{10.1007/978-3-030-58342-2_20}
Lepage, Y., Lieber, J., Mornard, I., Nauer, E., Romary, J., and Sies, R.
(2020).
\newblock The French correction: When retrieval is harder to specify than
adaptation. 224 231 adaptation.
\newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning 225 232 \newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning
Research and Development}, pages 309--324, Cham. Springer International 226 233 Research and Development}, pages 309--324, Cham. Springer International
Publishing. 227 234 Publishing.
228 235
\bibitem[Li et~al., 2024]{Li_2024} 229 236 \bibitem[Li et~al., 2024]{Li_2024}
Li, Z., Ding, Z., Yu, Y., and Zhang, P. (2024). 230 237 Li, Z., Ding, Z., Yu, Y., and Zhang, P. (2024).
\newblock The Kullback–Leibler divergence and the convergence rate of fast
covariance matrix estimators in galaxy clustering analysis. 232 239 covariance matrix estimators in galaxy clustering analysis.
\newblock {\em The Astrophysical Journal}, 965(2):125. 233 240 \newblock {\em The Astrophysical Journal}, 965(2):125.
234 241
\bibitem[Liang et~al., 2021]{10.3389/fgene.2021.600040} 235 242 \bibitem[Liang et~al., 2021]{10.3389/fgene.2021.600040}
Liang, M., Chang, T., An, B., Duan, X., Du, L., Wang, X., Miao, J., Xu, L., 236 243 Liang, M., Chang, T., An, B., Duan, X., Du, L., Wang, X., Miao, J., Xu, L.,
Gao, X., Zhang, L., Li, J., and Gao, H. (2021). 237 244 Gao, X., Zhang, L., Li, J., and Gao, H. (2021).
\newblock A stacking ensemble learning framework for genomic prediction. 238 245 \newblock A stacking ensemble learning framework for genomic prediction.
\newblock {\em Frontiers in Genetics}, 12. 239 246 \newblock {\em Frontiers in Genetics}, 12.
240 247
\bibitem[Lin, 2022]{9870279} 241 248 \bibitem[Lin, 2022]{9870279}
Lin, B. (2022). 242 249 Lin, B. (2022).
\newblock Evolutionary multi-armed bandits with genetic Thompson sampling.
\newblock In {\em 2022 IEEE Congress on Evolutionary Computation (CEC)}, pages 244 251 \newblock In {\em 2022 IEEE Congress on Evolutionary Computation (CEC)}, pages
1--8. 245 252 1--8.
246 253
\bibitem[Liu and Yu, 2023]{Liu2023} 247 254 \bibitem[Liu and Yu, 2023]{Liu2023}
Liu, M. and Yu, D. (2023). 248 255 Liu, M. and Yu, D. (2023).
\newblock Towards intelligent e-learning systems. 249 256 \newblock Towards intelligent e-learning systems.
\newblock {\em Education and Information Technologies}, 28(7):7845--7876. 250 257 \newblock {\em Education and Information Technologies}, 28(7):7845--7876.
251 258
\bibitem[Louvros et~al., 2023]{jmse11050890} 252 259 \bibitem[Louvros et~al., 2023]{jmse11050890}
Louvros, P., Stefanidis, F., Boulougouris, E., Komianos, A., and Vassalos, D. 253 260 Louvros, P., Stefanidis, F., Boulougouris, E., Komianos, A., and Vassalos, D.
(2023). 254 261 (2023).
\newblock Machine learning and case-based reasoning for real-time onboard 255 262 \newblock Machine learning and case-based reasoning for real-time onboard
prediction of the survivability of ships. 256 263 prediction of the survivability of ships.
\newblock {\em Journal of Marine Science and Engineering}, 11(5). 257 264 \newblock {\em Journal of Marine Science and Engineering}, 11(5).
258 265
\bibitem[Maher and Grace, 2017]{10.1007/978-3-319-61030-6_1} 259 266 \bibitem[Maher and Grace, 2017]{10.1007/978-3-319-61030-6_1}
Maher, M.~L. and Grace, K. (2017). 260 267 Maher, M.~L. and Grace, K. (2017).
\newblock Encouraging curiosity in case-based reasoning and recommender 261 268 \newblock Encouraging curiosity in case-based reasoning and recommender
systems. 262 269 systems.
\newblock In Aha, D.~W. and Lieber, J., editors, {\em Case-Based Reasoning 263 270 \newblock In Aha, D.~W. and Lieber, J., editors, {\em Case-Based Reasoning
Research and Development}, pages 3--15, Cham. Springer International 264 271 Research and Development}, pages 3--15, Cham. Springer International
Publishing. 265 272 Publishing.
266 273
\bibitem[Malburg et~al., 2024]{10.1007/978-3-031-63646-2_4} 267 274 \bibitem[Malburg et~al., 2024]{10.1007/978-3-031-63646-2_4}
Malburg, L., Hotz, M., and Bergmann, R. (2024). 268 275 Malburg, L., Hotz, M., and Bergmann, R. (2024).
\newblock Improving complex adaptations in process-oriented case-based 269 276 \newblock Improving complex adaptations in process-oriented case-based
reasoning by applying rule-based adaptation. 270 277 reasoning by applying rule-based adaptation.
\newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D., 271 278 \newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D.,
editors, {\em Case-Based Reasoning Research and Development}, pages 50--66, 272 279 editors, {\em Case-Based Reasoning Research and Development}, pages 50--66,
Cham. Springer Nature Switzerland. 273 280 Cham. Springer Nature Switzerland.
274 281
\bibitem[Minsker and Strawn, 2024]{doi:10.1137/23M1592420} 282 289 \bibitem[Minsker and Strawn, 2024]{doi:10.1137/23M1592420}
Minsker, S. and Strawn, N. (2024). 283 290 Minsker, S. and Strawn, N. (2024).
\newblock The geometric median and applications to robust mean estimation. 284 291 \newblock The geometric median and applications to robust mean estimation.
\newblock {\em SIAM Journal on Mathematics of Data Science}, 6(2):504--533. 285 292 \newblock {\em SIAM Journal on Mathematics of Data Science}, 6(2):504--533.
286 293
\bibitem[Muangprathub et~al., 2020]{MUANGPRATHUB2020e05227} 287 294 \bibitem[Muangprathub et~al., 2020]{MUANGPRATHUB2020e05227}
Muangprathub, J., Boonjing, V., and Chamnongthai, K. (2020). 288 295 Muangprathub, J., Boonjing, V., and Chamnongthai, K. (2020).
\newblock Learning recommendation with formal concept analysis for intelligent 289 296 \newblock Learning recommendation with formal concept analysis for intelligent
tutoring system. 290 297 tutoring system.
\newblock {\em Heliyon}, 6(10):e05227. 291 298 \newblock {\em Heliyon}, 6(10):e05227.
292 299
\bibitem[Müller and Bergmann, 2015]{Muller} 293 300 \bibitem[Müller and Bergmann, 2015]{Muller}
Müller, G. and Bergmann, R. (2015). 294 301 Müller, G. and Bergmann, R. (2015).
\newblock CookingCAKE: A framework for the adaptation of cooking recipes
represented as workflows. 296 303 represented as workflows.
\newblock In {\em International Conference on Case-Based Reasoning}. 297 304 \newblock In {\em International Conference on Case-Based Reasoning}.
298 305
\bibitem[Nguyen, 2024]{NGUYEN2024111566} 299 306 \bibitem[Nguyen, 2024]{NGUYEN2024111566}
Nguyen, A. (2024). 300 307 Nguyen, A. (2024).
\newblock Dynamic metaheuristic selection via thompson sampling for online 301 308 \newblock Dynamic metaheuristic selection via thompson sampling for online
optimization. 302 309 optimization.
\newblock {\em Applied Soft Computing}, 158:111566. 303 310 \newblock {\em Applied Soft Computing}, 158:111566.
304 311
\bibitem[Nkambou et~al., 2010]{Nkambou} 305 312 \bibitem[Nkambou et~al., 2010]{Nkambou}
Nkambou, R., Bourdeau, J., and Mizoguchi, R. (2010). 306 313 Nkambou, R., Bourdeau, J., and Mizoguchi, R. (2010).
\newblock {\em Advances in Intelligent Tutoring Systems}. 307 314 \newblock {\em Advances in Intelligent Tutoring Systems}.
\newblock Springer Berlin, Heidelberg, 1 edition. 308 315 \newblock Springer Berlin, Heidelberg, 1 edition.
309 316
\bibitem[Obeid et~al., 2022]{Obeid} 310 317 \bibitem[Obeid et~al., 2022]{Obeid}
Obeid, C., Lahoud, C., Khoury, H.~E., and Champin, P. (2022). 311 318 Obeid, C., Lahoud, C., Khoury, H.~E., and Champin, P. (2022).
\newblock A novel hybrid recommender system approach for student academic 312 319 \newblock A novel hybrid recommender system approach for student academic
advising named cohrs, supported by case-based reasoning and ontology. 313 320 advising named cohrs, supported by case-based reasoning and ontology.
\newblock {\em Computer Science and Information Systems}, 19(2):979--1005.
315 322
\bibitem[Onta{\~{n}}{\'o}n et~al., 2015]{10.1007/978-3-319-24586-7_20} 316 323 \bibitem[Onta{\~{n}}{\'o}n et~al., 2015]{10.1007/978-3-319-24586-7_20}
Onta{\~{n}}{\'o}n, S., Plaza, E., and Zhu, J. (2015). 317 324 Onta{\~{n}}{\'o}n, S., Plaza, E., and Zhu, J. (2015).
\newblock Argument-based case revision in CBR for story generation.
\newblock In H{\"u}llermeier, E. and Minor, M., editors, {\em Case-Based 319 326 \newblock In H{\"u}llermeier, E. and Minor, M., editors, {\em Case-Based
Reasoning Research and Development}, pages 290--305, Cham. Springer 320 327 Reasoning Research and Development}, pages 290--305, Cham. Springer
International Publishing. 321 328 International Publishing.
322 329
\bibitem[Ou et~al., 2024]{pmlr-v238-ou24a} 323 330 \bibitem[Ou et~al., 2024]{pmlr-v238-ou24a}
Ou, T., Cummings, R., and Avella~Medina, M. (2024). 324 331 Ou, T., Cummings, R., and Avella~Medina, M. (2024).
\newblock Thompson sampling itself is differentially private. 325 332 \newblock Thompson sampling itself is differentially private.
\newblock In Dasgupta, S., Mandt, S., and Li, Y., editors, {\em Proceedings of 326 333 \newblock In Dasgupta, S., Mandt, S., and Li, Y., editors, {\em Proceedings of
The 27th International Conference on Artificial Intelligence and Statistics}, 327 334 The 27th International Conference on Artificial Intelligence and Statistics},
volume 238 of {\em Proceedings of Machine Learning Research}, pages 328 335 volume 238 of {\em Proceedings of Machine Learning Research}, pages
1576--1584. PMLR. 329 336 1576--1584. PMLR.
330 337
\bibitem[Parejas-Llanovarced et~al., 2024]{PAREJASLLANOVARCED2024111469} 331 338 \bibitem[Parejas-Llanovarced et~al., 2024]{PAREJASLLANOVARCED2024111469}
Parejas-Llanovarced, H., Caro-Martínez, M., del Castillo, M. G.~O., and 332 339 Parejas-Llanovarced, H., Caro-Martínez, M., del Castillo, M. G.~O., and
Recio-García, J.~A. (2024). 333 340 Recio-García, J.~A. (2024).
\newblock Case-based selection of explanation methods for neural network image 334 341 \newblock Case-based selection of explanation methods for neural network image
classifiers. 335 342 classifiers.
\newblock {\em Knowledge-Based Systems}, 288:111469. 336 343 \newblock {\em Knowledge-Based Systems}, 288:111469.
337 344
\bibitem[Petrovic et~al., 2016]{PETROVIC201617} 338 345 \bibitem[Petrovic et~al., 2016]{PETROVIC201617}
Petrovic, S., Khussainova, G., and Jagannathan, R. (2016). 339 346 Petrovic, S., Khussainova, G., and Jagannathan, R. (2016).
\newblock Knowledge-light adaptation approaches in case-based reasoning for 340 347 \newblock Knowledge-light adaptation approaches in case-based reasoning for
radiotherapy treatment planning. 341 348 radiotherapy treatment planning.
\newblock {\em Artificial Intelligence in Medicine}, 68:17--28. 342 349 \newblock {\em Artificial Intelligence in Medicine}, 68:17--28.
343 350
\bibitem[Richter and Weber, 2013]{Richter2013} 344 351 \bibitem[Richter and Weber, 2013]{Richter2013}
Richter, M. and Weber, R. (2013). 345 352 Richter, M. and Weber, R. (2013).
\newblock {\em Case-Based Reasoning (A Textbook)}. 346 353 \newblock {\em Case-Based Reasoning (A Textbook)}.
\newblock Springer-Verlag GmbH. 347 354 \newblock Springer-Verlag GmbH.
348 355
\bibitem[Richter, 2009]{RICHTER20093} 349 356 \bibitem[Richter, 2009]{RICHTER20093}
Richter, M.~M. (2009). 350 357 Richter, M.~M. (2009).
\newblock The search for knowledge, contexts, and case-based reasoning. 351 358 \newblock The search for knowledge, contexts, and case-based reasoning.
\newblock {\em Engineering Applications of Artificial Intelligence}, 352 359 \newblock {\em Engineering Applications of Artificial Intelligence},
22(1):3--9. 353 360 22(1):3--9.
354 361
\bibitem[Robertson and Watson, 2014]{Robertson2014ARO} 355 362 \bibitem[Robertson and Watson, 2014]{Robertson2014ARO}
Robertson, G. and Watson, I.~D. (2014). 356 363 Robertson, G. and Watson, I.~D. (2014).
\newblock A review of real-time strategy game AI.
\newblock {\em AI Mag.}, 35:75--104. 358 365 \newblock {\em AI Mag.}, 35:75--104.
359 366
\bibitem[{Roldan Reyes} et~al., 2015]{ROLDANREYES20151} 360 367 \bibitem[{Roldan Reyes} et~al., 2015]{ROLDANREYES20151}
{Roldan Reyes}, E., Negny, S., {Cortes Robles}, G., and {Le Lann}, J. (2015). 361 368 {Roldan Reyes}, E., Negny, S., {Cortes Robles}, G., and {Le Lann}, J. (2015).
\newblock Improvement of online adaptation knowledge acquisition and reuse in 362 369 \newblock Improvement of online adaptation knowledge acquisition and reuse in
case-based reasoning: Application to process engineering design. 363 370 case-based reasoning: Application to process engineering design.
\newblock {\em Engineering Applications of Artificial Intelligence}, 41:1--16. 364 371 \newblock {\em Engineering Applications of Artificial Intelligence}, 41:1--16.
365 372
\bibitem[Sadeghi~Moghadam et~al., 2024]{Sadeghi} 366 373 \bibitem[Sadeghi~Moghadam et~al., 2024]{Sadeghi}
Sadeghi~Moghadam, M.~R., Jafarnejad, A., Heidary~Dahooie, J., and 367 374 Sadeghi~Moghadam, M.~R., Jafarnejad, A., Heidary~Dahooie, J., and
Ghasemian~Sahebi, I. (2024). 368 375 Ghasemian~Sahebi, I. (2024).
\newblock A hidden markov model based extended case-based reasoning algorithm 369 376 \newblock A hidden markov model based extended case-based reasoning algorithm
for relief materials demand forecasting. 370 377 for relief materials demand forecasting.
\newblock {\em Mathematics Interdisciplinary Research}, 9(1):89--109. 371 378 \newblock {\em Mathematics Interdisciplinary Research}, 9(1):89--109.
372 379
\bibitem[Schank and Abelson, 1977]{schank+abelson77} 373 380 \bibitem[Schank and Abelson, 1977]{schank+abelson77}
Schank, R.~C. and Abelson, R.~P. (1977). 374 381 Schank, R.~C. and Abelson, R.~P. (1977).
\newblock {\em Scripts, Plans, Goals and Understanding: an Inquiry into Human 375 382 \newblock {\em Scripts, Plans, Goals and Understanding: an Inquiry into Human
Knowledge Structures}. 376 383 Knowledge Structures}.
\newblock L. Erlbaum, Hillsdale, NJ. 377 384 \newblock L. Erlbaum, Hillsdale, NJ.
378 385
\bibitem[Seznec et~al., 2020]{pmlr-v108-seznec20a} 379 386 \bibitem[Seznec et~al., 2020]{pmlr-v108-seznec20a}
Seznec, J., Menard, P., Lazaric, A., and Valko, M. (2020). 380 387 Seznec, J., Menard, P., Lazaric, A., and Valko, M. (2020).
\newblock A single algorithm for both restless and rested rotting bandits. 381 388 \newblock A single algorithm for both restless and rested rotting bandits.
\newblock In Chiappa, S. and Calandra, R., editors, {\em Proceedings of the 382 389 \newblock In Chiappa, S. and Calandra, R., editors, {\em Proceedings of the
Twenty Third International Conference on Artificial Intelligence and 383 390 Twenty Third International Conference on Artificial Intelligence and
Statistics}, volume 108 of {\em Proceedings of Machine Learning Research}, 384 391 Statistics}, volume 108 of {\em Proceedings of Machine Learning Research},
pages 3784--3794. PMLR. 385 392 pages 3784--3794. PMLR.
386 393
\bibitem[Sinaga and Yang, 2020]{9072123} 387 394 \bibitem[Sinaga and Yang, 2020]{9072123}
Sinaga, K.~P. and Yang, M.-S. (2020). 388 395 Sinaga, K.~P. and Yang, M.-S. (2020).
\newblock Unsupervised k-means clustering algorithm. 389 396 \newblock Unsupervised k-means clustering algorithm.
\newblock {\em IEEE Access}, 8:80716--80727. 390 397 \newblock {\em IEEE Access}, 8:80716--80727.
391 398
\bibitem[Skittou et~al., 2024]{skittou2024recommender} 392 399 \bibitem[Skittou et~al., 2024]{skittou2024recommender}
Skittou, M., Merrouchi, M., and Gadi, T. (2024). 393 400 Skittou, M., Merrouchi, M., and Gadi, T. (2024).
\newblock A recommender system for educational planning. 394 401 \newblock A recommender system for educational planning.
\newblock {\em Cybernetics and Information Technologies}, 24(2):67--85. 395 402 \newblock {\em Cybernetics and Information Technologies}, 24(2):67--85.
396 403
\bibitem[Smyth and Cunningham, 2018]{10.1007/978-3-030-01081-2_25} 397 404 \bibitem[Smyth and Cunningham, 2018]{10.1007/978-3-030-01081-2_25}
Smyth, B. and Cunningham, P. (2018). 398 405 Smyth, B. and Cunningham, P. (2018).
\newblock An analysis of case representations for marathon race prediction and 399 406 \newblock An analysis of case representations for marathon race prediction and
planning. 400 407 planning.
\newblock In Cox, M.~T., Funk, P., and Begum, S., editors, {\em Case-Based 401 408 \newblock In Cox, M.~T., Funk, P., and Begum, S., editors, {\em Case-Based
Reasoning Research and Development}, pages 369--384, Cham. Springer 402 409 Reasoning Research and Development}, pages 369--384, Cham. Springer
International Publishing. 403 410 International Publishing.
404 411
\bibitem[Smyth and Willemsen, 2020]{10.1007/978-3-030-58342-2_8} 405 412 \bibitem[Smyth and Willemsen, 2020]{10.1007/978-3-030-58342-2_8}
Smyth, B. and Willemsen, M.~C. (2020). 406 413 Smyth, B. and Willemsen, M.~C. (2020).
\newblock Predicting the personal-best times of speed skaters using case-based 407 414 \newblock Predicting the personal-best times of speed skaters using case-based
reasoning. 408 415 reasoning.
\newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning 409 416 \newblock In Watson, I. and Weber, R., editors, {\em Case-Based Reasoning
Research and Development}, pages 112--126, Cham. Springer International 410 417 Research and Development}, pages 112--126, Cham. Springer International
Publishing. 411 418 Publishing.
412 419
\bibitem[Soto-Forero et~al., 2024a]{Soto2} 413 420 \bibitem[Soto-Forero et~al., 2024a]{Soto2}
Soto-Forero, D., Ackermann, S., Betbeder, M.-L., and Henriet, J. (2024a). 414 421 Soto-Forero, D., Ackermann, S., Betbeder, M.-L., and Henriet, J. (2024a).
\newblock Automatic real-time adaptation of training session difficulty using 415 422 \newblock Automatic real-time adaptation of training session difficulty using
rules and reinforcement learning in the AI-VT ITS.
\newblock {\em International Journal of Modern Education and Computer 417 424 \newblock {\em International Journal of Modern Education and Computer
Science (IJMECS)}, 16:56--71.
419 426
\bibitem[Soto-Forero et~al., 2024b]{10.1007/978-3-031-63646-2_13} 420 427 \bibitem[Soto-Forero et~al., 2024b]{10.1007/978-3-031-63646-2_13}
Soto-Forero, D., Ackermann, S., Betbeder, M.-L., and Henriet, J. (2024b). 421 428 Soto-Forero, D., Ackermann, S., Betbeder, M.-L., and Henriet, J. (2024b).
\newblock The intelligent tutoring system AI-VT with case-based reasoning and
real time recommender models. 423 430 real time recommender models.
\newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D., 424 431 \newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D.,
editors, {\em Case-Based Reasoning Research and Development}, pages 191--205, 425 432 editors, {\em Case-Based Reasoning Research and Development}, pages 191--205,
Cham. Springer Nature Switzerland. 426 433 Cham. Springer Nature Switzerland.
427 434
\bibitem[Soto-Forero et~al., 2024c]{10.1007/978-3-031-63646-2_11} 428 435 \bibitem[Soto-Forero et~al., 2024c]{10.1007/978-3-031-63646-2_11}
Soto-Forero, D., Betbeder, M.-L., and Henriet, J. (2024c). 429 436 Soto-Forero, D., Betbeder, M.-L., and Henriet, J. (2024c).
\newblock Ensemble stacking case-based reasoning for regression. 430 437 \newblock Ensemble stacking case-based reasoning for regression.
\newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D., 431 438 \newblock In Recio-Garcia, J.~A., Orozco-del Castillo, M.~G., and Bridge, D.,
editors, {\em Case-Based Reasoning Research and Development}, pages 159--174, 432 439 editors, {\em Case-Based Reasoning Research and Development}, pages 159--174,
Cham. Springer Nature Switzerland. 433 440 Cham. Springer Nature Switzerland.
434 441
\bibitem[Su et~al., 2022]{SU2022109547} 435 442 \bibitem[Su et~al., 2022]{SU2022109547}
Su, Y., Cheng, Z., Wu, J., Dong, Y., Huang, Z., Wu, L., Chen, E., Wang, S., and 436 443 Su, Y., Cheng, Z., Wu, J., Dong, Y., Huang, Z., Wu, L., Chen, E., Wang, S., and
Xie, F. (2022). 437 444 Xie, F. (2022).
\newblock Graph-based cognitive diagnosis for intelligent tutoring systems. 438 445 \newblock Graph-based cognitive diagnosis for intelligent tutoring systems.
\newblock {\em Knowledge-Based Systems}, 253:109547. 439 446 \newblock {\em Knowledge-Based Systems}, 253:109547.
440 447
\bibitem[Supic, 2018]{8495930} 441 448 \bibitem[Supic, 2018]{8495930}
Supic, H. (2018). 442 449 Supic, H. (2018).
\newblock Case-based reasoning model for personalized learning path 443 450 \newblock Case-based reasoning model for personalized learning path
recommendation in example-based learning activities. 444 451 recommendation in example-based learning activities.
\newblock In {\em 2018 IEEE 27th International Conference on Enabling 445 452 \newblock In {\em 2018 IEEE 27th International Conference on Enabling
Technologies: Infrastructure for Collaborative Enterprises (WETICE)}, pages 446 453 Technologies: Infrastructure for Collaborative Enterprises (WETICE)}, pages
175--178. 447 454 175--178.
448 455
\bibitem[Uguina et~al., 2024]{math12111758} 449 456 \bibitem[Uguina et~al., 2024]{math12111758}
Uguina, A.~R., Gomez, J.~F., Panadero, J., Martínez-Gavara, A., and Juan, 450 457 Uguina, A.~R., Gomez, J.~F., Panadero, J., Martínez-Gavara, A., and Juan,
@article{ZHANG2021100025, 1 1 @article{ZHANG2021100025,
title = {AI technologies for education: Recent research and future directions}, 2 2 title = {AI technologies for education: Recent research and future directions},
journal = {Computers and Education: Artificial Intelligence}, 3 3 journal = {Computers and Education: Artificial Intelligence},
volume = {2}, 4 4 volume = {2},
pages = {100025}, 5 5 pages = {100025},
language = {English}, 6 6 language = {English},
year = {2021}, 7 7 year = {2021},
issn = {2666-920X}, 8 8 issn = {2666-920X},
type = {article}, 9 9 type = {article},
doi = {10.1016/j.caeai.2021.100025},
url = {https://www.sciencedirect.com/science/article/pii/S2666920X21000199}, 11 11 url = {https://www.sciencedirect.com/science/article/pii/S2666920X21000199},
author = {Ke Zhang and Ayse Begum Aslan},
address={USA}, 13 13 address={USA},
affiliation={Wayne State University; Eastern Michigan University}, 14 14 affiliation={Wayne State University; Eastern Michigan University},
keywords = {Artificial intelligence, AI, AI in Education}, 15 15 keywords = {Artificial intelligence, AI, AI in Education},
abstract = {From unique educational perspectives, this article reports a comprehensive review of selected empirical studies on artificial intelligence in education (AIEd) published in 1993–2020, as collected in the Web of Sciences database and selected AIEd-specialized journals. A total of 40 empirical studies met all selection criteria, and were fully reviewed using multiple methods, including selected bibliometrics, content analysis and categorical meta-trends analysis. This article reports the current state of AIEd research, highlights selected AIEd technologies and applications, reviews their proven and potential benefits for education, bridges the gaps between AI technological innovations and their educational applications, and generates practical examples and inspirations for both technological experts that create AIEd technologies and educators who spearhead AI innovations in education. It also provides rich discussions on practical implications and future research directions from multiple perspectives. The advancement of AIEd calls for critical initiatives to address AI ethics and privacy concerns, and requires interdisciplinary and transdisciplinary collaborations in large-scaled, longitudinal research and development efforts.} 16 16 abstract = {From unique educational perspectives, this article reports a comprehensive review of selected empirical studies on artificial intelligence in education (AIEd) published in 1993–2020, as collected in the Web of Sciences database and selected AIEd-specialized journals. A total of 40 empirical studies met all selection criteria, and were fully reviewed using multiple methods, including selected bibliometrics, content analysis and categorical meta-trends analysis. 
This article reports the current state of AIEd research, highlights selected AIEd technologies and applications, reviews their proven and potential benefits for education, bridges the gaps between AI technological innovations and their educational applications, and generates practical examples and inspirations for both technological experts that create AIEd technologies and educators who spearhead AI innovations in education. It also provides rich discussions on practical implications and future research directions from multiple perspectives. The advancement of AIEd calls for critical initiatives to address AI ethics and privacy concerns, and requires interdisciplinary and transdisciplinary collaborations in large-scaled, longitudinal research and development efforts.}
} 17 17 }
18 18
@article{PETROVIC201617, 19 19 @article{PETROVIC201617,
title = {Knowledge-light adaptation approaches in case-based reasoning for radiotherapy treatment planning}, 20 20 title = {Knowledge-light adaptation approaches in case-based reasoning for radiotherapy treatment planning},
journal = {Artificial Intelligence in Medicine}, 21 21 journal = {Artificial Intelligence in Medicine},
volume = {68}, 22 22 volume = {68},
pages = {17-28}, 23 23 pages = {17-28},
year = {2016}, 24 24 year = {2016},
language = {English}, 25 25 language = {English},
issn = {0933-3657}, 26 26 issn = {0933-3657},
type = {article}, 27 27 type = {article},
doi = {10.1016/j.artmed.2016.01.006},
url = {https://www.sciencedirect.com/science/article/pii/S093336571630015X}, 29 29 url = {https://www.sciencedirect.com/science/article/pii/S093336571630015X},
author = {Sanja Petrovic and Gulmira Khussainova and Rupa Jagannathan}, 30 30 author = {Sanja Petrovic and Gulmira Khussainova and Rupa Jagannathan},
affiliation={Nottingham University}, 31 31 affiliation={Nottingham University},
address={UK}, 32 32 address={UK},
keywords = {Case-based reasoning, Adaptation-guided retrieval, Machine-learning tools, Radiotherapy treatment planning}, 33 33 keywords = {Case-based reasoning, Adaptation-guided retrieval, Machine-learning tools, Radiotherapy treatment planning},
abstract = {Objective 34 34 abstract = {Objective
Radiotherapy treatment planning aims at delivering a sufficient radiation dose to cancerous tumour cells while sparing healthy organs in the tumour-surrounding area. It is a time-consuming trial-and-error process that requires the expertise of a group of medical experts including oncologists and medical physicists and can take from 2 to 3h to a few days. Our objective is to improve the performance of our previously built case-based reasoning (CBR) system for brain tumour radiotherapy treatment planning. In this system, a treatment plan for a new patient is retrieved from a case base containing patient cases treated in the past and their treatment plans. However, this system does not perform any adaptation, which is needed to account for any difference between the new and retrieved cases. Generally, the adaptation phase is considered to be intrinsically knowledge-intensive and domain-dependent. Therefore, an adaptation often requires a large amount of domain-specific knowledge, which can be difficult to acquire and often is not readily available. In this study, we investigate approaches to adaptation that do not require much domain knowledge, referred to as knowledge-light adaptation. 35 35 Radiotherapy treatment planning aims at delivering a sufficient radiation dose to cancerous tumour cells while sparing healthy organs in the tumour-surrounding area. It is a time-consuming trial-and-error process that requires the expertise of a group of medical experts including oncologists and medical physicists and can take from 2 to 3h to a few days. Our objective is to improve the performance of our previously built case-based reasoning (CBR) system for brain tumour radiotherapy treatment planning. In this system, a treatment plan for a new patient is retrieved from a case base containing patient cases treated in the past and their treatment plans. 
However, this system does not perform any adaptation, which is needed to account for any difference between the new and retrieved cases. Generally, the adaptation phase is considered to be intrinsically knowledge-intensive and domain-dependent. Therefore, an adaptation often requires a large amount of domain-specific knowledge, which can be difficult to acquire and often is not readily available. In this study, we investigate approaches to adaptation that do not require much domain knowledge, referred to as knowledge-light adaptation.
Methodology 36 36 Methodology
We developed two adaptation approaches: adaptation based on machine-learning tools and adaptation-guided retrieval. They were used to adapt the beam number and beam angles suggested in the retrieved case. Two machine-learning tools, neural networks and naive Bayes classifier, were used in the adaptation to learn how the difference in attribute values between the retrieved and new cases affects the output of these two cases. The adaptation-guided retrieval takes into consideration not only the similarity between the new and retrieved cases, but also how to adapt the retrieved case. 37 37 We developed two adaptation approaches: adaptation based on machine-learning tools and adaptation-guided retrieval. They were used to adapt the beam number and beam angles suggested in the retrieved case. Two machine-learning tools, neural networks and naive Bayes classifier, were used in the adaptation to learn how the difference in attribute values between the retrieved and new cases affects the output of these two cases. The adaptation-guided retrieval takes into consideration not only the similarity between the new and retrieved cases, but also how to adapt the retrieved case.
Results 38 38 Results
The research was carried out in collaboration with medical physicists at the Nottingham University Hospitals NHS Trust, City Hospital Campus, UK. All experiments were performed using real-world brain cancer patient cases treated with three-dimensional (3D)-conformal radiotherapy. Neural networks-based adaptation improved the success rate of the CBR system with no adaptation by 12%. However, naive Bayes classifier did not improve the current retrieval results as it did not consider the interplay among attributes. The adaptation-guided retrieval of the case for beam number improved the success rate of the CBR system by 29%. However, it did not demonstrate good performance for the beam angle adaptation. Its success rate was 29% versus 39% when no adaptation was performed. 39 39 The research was carried out in collaboration with medical physicists at the Nottingham University Hospitals NHS Trust, City Hospital Campus, UK. All experiments were performed using real-world brain cancer patient cases treated with three-dimensional (3D)-conformal radiotherapy. Neural networks-based adaptation improved the success rate of the CBR system with no adaptation by 12%. However, naive Bayes classifier did not improve the current retrieval results as it did not consider the interplay among attributes. The adaptation-guided retrieval of the case for beam number improved the success rate of the CBR system by 29%. However, it did not demonstrate good performance for the beam angle adaptation. Its success rate was 29% versus 39% when no adaptation was performed.
Conclusions 40 40 Conclusions
The obtained empirical results demonstrate that the proposed adaptation methods improve the performance of the existing CBR system in recommending the number of beams to use. However, we also conclude that to be effective, the proposed adaptation of beam angles requires a large number of relevant cases in the case base.} 41 41 The obtained empirical results demonstrate that the proposed adaptation methods improve the performance of the existing CBR system in recommending the number of beams to use. However, we also conclude that to be effective, the proposed adaptation of beam angles requires a large number of relevant cases in the case base.}
} 42 42 }
43 43
@article{ROLDANREYES20151, 44 44 @article{ROLDANREYES20151,
title = {Improvement of online adaptation knowledge acquisition and reuse in case-based reasoning: Application to process engineering design}, 45 45 title = {Improvement of online adaptation knowledge acquisition and reuse in case-based reasoning: Application to process engineering design},
journal = {Engineering Applications of Artificial Intelligence}, 46 46 journal = {Engineering Applications of Artificial Intelligence},
volume = {41}, 47 47 volume = {41},
pages = {1-16}, 48 48 pages = {1-16},
affiliation={Université de Toulouse; Instituto Tecnologico de Orizaba}, 49 49 affiliation={Université de Toulouse; Instituto Tecnologico de Orizaba},
country={France}, 50 50 country={France},
language = {English}, 51 51 language = {English},
year = {2015}, 52 52 year = {2015},
type = {article}, 53 53 type = {article},
issn = {0952-1976}, 54 54 issn = {0952-1976},
doi = {10.1016/j.engappai.2015.01.015},
url = {https://www.sciencedirect.com/science/article/pii/S0952197615000263}, 56 56 url = {https://www.sciencedirect.com/science/article/pii/S0952197615000263},
author = {E. {Roldan Reyes} and S. Negny and G. {Cortes Robles} and J.M. {Le Lann}}, 57 57 author = {E. {Roldan Reyes} and S. Negny and G. {Cortes Robles} and J.M. {Le Lann}},
keywords = {Case based reasoning, Constraint satisfaction problems, Interactive adaptation method, Online knowledge acquisition, Failure diagnosis and repair}, 58 58 keywords = {Case based reasoning, Constraint satisfaction problems, Interactive adaptation method, Online knowledge acquisition, Failure diagnosis and repair},
abstract = {Despite various publications in the area during the last few years, the adaptation step is still a crucial phase for a relevant and reasonable Case-Based Reasoning system. Furthermore, the online acquisition of new adaptation knowledge is of particular interest as it enables the progressive improvement of the system while reducing the knowledge engineering effort without constraints for the expert. Therefore, this paper presents a new interactive method for adaptation knowledge elicitation, acquisition and reuse, thanks to a modification of the traditional CBR cycle. Moreover, to improve adaptation knowledge reuse, a test procedure is also implemented to help the user in the adaptation step and in its diagnosis during adaptation failure. A study on the quality and usefulness of the newly acquired knowledge is also conducted. As our Knowledge-Based System (KBS) is more focused on preliminary design, and more particularly on the field of process engineering, we need to unify two types of knowledge, contextual and general, in the same method. To realize this, this article proposes the integration of the Constraint Satisfaction Problem approach (based on general knowledge) into the Case-Based Reasoning process (based on contextual knowledge) to improve the case representation and the adaptation of past experiences. To highlight its capability, the proposed approach is illustrated through a case study dedicated to the design of an industrial mixing device.}
Therefore this paper presents a new interactive method for adaptation knowledge elicitation, acquisition and reuse, thanks to a modification of the traditional CBR cycle. Moreover to improve adaptation knowledge reuse, a test procedure is also implemented to help the user in the adaptation step and its diagnosis during adaptation failure. A study on the quality and usefulness of the new knowledge acquired is also driven. As our Knowledge Based Systems (KBS) is more focused on preliminary design, and more particularly in the field of process engineering, we need to unify in the same method two types of knowledge: contextual and general. To realize this, this article proposes the integration of the Constraint Satisfaction Problem (based on general knowledge) approach into the Case Based Reasoning (based on contextual knowledge) process to improve the case representation and the adaptation of past experiences. To highlight its capability, the proposed approach is illustrated through a case study dedicated to the design of an industrial mixing device.}
} 60 60 }
61 61
@article{JUNG20095695, 62 62 @article{JUNG20095695,
title = {Integrating radial basis function networks with case-based reasoning for product design}, 63 63 title = {Integrating radial basis function networks with case-based reasoning for product design},
journal = {Expert Systems with Applications}, 64 64 journal = {Expert Systems with Applications},
volume = {36}, 65 65 volume = {36},
number = {3, Part 1}, 66 66 number = {3, Part 1},
language = {English}, 67 67 language = {English},
pages = {5695-5701}, 68 68 pages = {5695-5701},
year = {2009}, 69 69 year = {2009},
type = {article}, 70 70 type = {article},
issn = {0957-4174}, 71 71 issn = {0957-4174},
doi = {10.1016/j.eswa.2008.06.099},
url = {https://www.sciencedirect.com/science/article/pii/S0957417408003667}, 73 73 url = {https://www.sciencedirect.com/science/article/pii/S0957417408003667},
author = {Sabum Jung and Taesoo Lim and Dongsoo Kim}, 74 74 author = {Sabum Jung and Taesoo Lim and Dongsoo Kim},
affiliation={LG Production Engineering Research Institute; Sungkyul University; Soongsil University}, 75 75 affiliation={LG Production Engineering Research Institute; Sungkyul University; Soongsil University},
keywords = {Case-based reasoning (CBR), Radial basis function network (RBFN), Design expert system, Product design}, 76 76 keywords = {Case-based reasoning (CBR), Radial basis function network (RBFN), Design expert system, Product design},
abstract = {This paper presents a case-based design expert system that automatically determines the design values of a product. We focus on the design problem of a shadow mask which is a core component of monitors in the electronics industry. In case-based reasoning (CBR), it is important to retrieve similar cases and adapt them to meet design specifications exactly. Notably, difficulties in automating the adaptation process have prevented designers from being able to use design expert systems easily and efficiently. In this paper, we present a hybrid approach combining CBR and artificial neural networks in order to solve the problems occurring during the adaptation process. We first constructed a radial basis function network (RBFN) composed of representative cases created by K-means clustering. Then, the representative case most similar to the current problem was adjusted using the network. The rationale behind the proposed approach is discussed, and experimental results acquired from real shadow mask design are presented. Using the design expert system, designers can reduce design time and errors and enhance the total quality of design. Furthermore, the expert system facilitates effective sharing of design knowledge among designers.} 77 77 abstract = {This paper presents a case-based design expert system that automatically determines the design values of a product. We focus on the design problem of a shadow mask which is a core component of monitors in the electronics industry. In case-based reasoning (CBR), it is important to retrieve similar cases and adapt them to meet design specifications exactly. Notably, difficulties in automating the adaptation process have prevented designers from being able to use design expert systems easily and efficiently. In this paper, we present a hybrid approach combining CBR and artificial neural networks in order to solve the problems occurring during the adaptation process. 
We first constructed a radial basis function network (RBFN) composed of representative cases created by K-means clustering. Then, the representative case most similar to the current problem was adjusted using the network. The rationale behind the proposed approach is discussed, and experimental results acquired from real shadow mask design are presented. Using the design expert system, designers can reduce design time and errors and enhance the total quality of design. Furthermore, the expert system facilitates effective sharing of design knowledge among designers.}
} 78 78 }
79 79
@article{CHIU2023100118, 80 80 @article{CHIU2023100118,
title = {Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education}, 81 81 title = {Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education},
journal = {Computers and Education: Artificial Intelligence}, 82 82 journal = {Computers and Education: Artificial Intelligence},
volume = {4}, 83 83 volume = {4},
language = {English}, 84 84 language = {English},
type = {article}, 85 85 type = {article},
pages = {100118}, 86 86 pages = {100118},
year = {2023}, 87 87 year = {2023},
issn = {2666-920X}, 88 88 issn = {2666-920X},
doi = {10.1016/j.caeai.2022.100118},
url = {https://www.sciencedirect.com/science/article/pii/S2666920X2200073X}, 90 90 url = {https://www.sciencedirect.com/science/article/pii/S2666920X2200073X},
author = {Thomas K.F. Chiu and Qi Xia and Xinyan Zhou and Ching Sing Chai and Miaoting Cheng}, 91 91 author = {Thomas K.F. Chiu and Qi Xia and Xinyan Zhou and Ching Sing Chai and Miaoting Cheng},
keywords = {Artificial intelligence, Artificial intelligence in education, Systematic review, Learning, Teaching, Assessment}, 92 92 keywords = {Artificial intelligence, Artificial intelligence in education, Systematic review, Learning, Teaching, Assessment},
abstract = {Applications of artificial intelligence in education (AIEd) are emerging and are new to researchers and practitioners alike. Reviews of the relevant literature have not examined how AI technologies have been integrated into each of the four key educational domains of learning, teaching, assessment, and administration. The relationships between the technologies and learning outcomes for students and teachers have also been neglected. This systematic review study aims to understand the opportunities and challenges of AIEd by examining the literature from the last 10 years (2012–2021) using matrix coding and content analysis approaches. The results present the current focus of AIEd research by identifying 13 roles of AI technologies in the key educational domains, 7 learning outcomes of AIEd, and 10 major challenges. The review also provides suggestions for future directions of AIEd research.} 93 93 abstract = {Applications of artificial intelligence in education (AIEd) are emerging and are new to researchers and practitioners alike. Reviews of the relevant literature have not examined how AI technologies have been integrated into each of the four key educational domains of learning, teaching, assessment, and administration. The relationships between the technologies and learning outcomes for students and teachers have also been neglected. This systematic review study aims to understand the opportunities and challenges of AIEd by examining the literature from the last 10 years (2012–2021) using matrix coding and content analysis approaches. The results present the current focus of AIEd research by identifying 13 roles of AI technologies in the key educational domains, 7 learning outcomes of AIEd, and 10 major challenges. The review also provides suggestions for future directions of AIEd research.}
} 94 94 }
95 95
@article{Robertson2014ARO, 96 96 @article{Robertson2014ARO,
title = {A Review of Real-Time Strategy Game AI}, 97 97 title = {A Review of Real-Time Strategy Game AI},
author = {Glen Robertson and Ian D. Watson}, 98 98 author = {Glen Robertson and Ian D. Watson},
affiliation = {University of Auckland},
keywords = {Game, AI, Real-time strategy},
type={article}, 101 101 type={article},
language={English}, 102 102 language={English},
abstract = {This literature review covers AI techniques used for real-time strategy video games, focusing specifically on StarCraft. It finds that the main areas of current academic research are in tactical and strategic decision making, plan recognition, and learning, and it outlines the research contributions in each of these areas. The paper then contrasts the use of game AI in academe and industry, finding the academic research heavily focused on creating game-winning agents, while the industry aims to maximize player enjoyment. It finds that industry adoption of academic research is low because it is either inapplicable or too time-consuming and risky to implement in a new game, which highlights an area for potential investigation: bridging the gap between academe and industry. Finally, the areas of spatial reasoning, multiscale AI, and cooperation are found to require future work, and standardized evaluation methods are proposed to produce comparable results between studies.}, 103 103 abstract = {This literature review covers AI techniques used for real-time strategy video games, focusing specifically on StarCraft. It finds that the main areas of current academic research are in tactical and strategic decision making, plan recognition, and learning, and it outlines the research contributions in each of these areas. The paper then contrasts the use of game AI in academe and industry, finding the academic research heavily focused on creating game-winning agents, while the industry aims to maximize player enjoyment. It finds that industry adoption of academic research is low because it is either inapplicable or too time-consuming and risky to implement in a new game, which highlights an area for potential investigation: bridging the gap between academe and industry. 
Finally, the areas of spatial reasoning, multiscale AI, and cooperation are found to require future work, and standardized evaluation methods are proposed to produce comparable results between studies.},
journal = {AI Magazine},
year = {2014}, 105 105 year = {2014},
volume = {35}, 106 106 volume = {35},
pages = {75-104} 107 107 pages = {75-104}
} 108 108 }
109 109
@Inproceedings{10.1007/978-3-642-15973-2_50, 110 110 @Inproceedings{10.1007/978-3-642-15973-2_50,
author={Butdee, S. 111 111 author={Butdee, S.
and Tichkiewitch, S.}, 112 112 and Tichkiewitch, S.},
affiliation={University of Technology North Bangkok; Grenoble Institute of Technology}, 113 113 affiliation={University of Technology North Bangkok; Grenoble Institute of Technology},
editor={Bernard, Alain}, 114 114 editor={Bernard, Alain},
title={Case-Based Reasoning for Adaptive Aluminum Extrusion Die Design Together with Parameters by Neural Networks}, 115 115 title={Case-Based Reasoning for Adaptive Aluminum Extrusion Die Design Together with Parameters by Neural Networks},
keywords={Adaptive die design and parameters, Optimal aluminum extrusion, Case-based reasoning, Neural networks}, 116 116 keywords={Adaptive die design and parameters, Optimal aluminum extrusion, Case-based reasoning, Neural networks},
booktitle={Global Product Development}, 117 117 booktitle={Global Product Development},
year={2011}, 118 118 year={2011},
type = {article; proceedings paper}, 119 119 type = {article; proceedings paper},
language = {English}, 120 120 language = {English},
publisher = {Springer Berlin Heidelberg}, 121 121 publisher = {Springer Berlin Heidelberg},
address = {Berlin, Heidelberg}, 122 122 address = {Berlin, Heidelberg},
pages = {491--496}, 123 123 pages = {491--496},
abstract = {Nowadays, aluminum extrusion die design is a critical task for improving productivity, which involves quality, time and cost. The Case-Based Reasoning (CBR) method has been successfully applied to support the die design process: a new die is designed by matching previous problems, together with their solutions, to a new similar problem. Such solutions are selected and modified to solve the present problem. However, applications of CBR are useful only for retrieving previous features, whereas the critical parameters are missing. In addition, the experience learned for such parameters is limited. This chapter proposes an Artificial Neural Network (ANN) to complement the CBR in order to learn previous parameters and predict those of the new die design according to the primitive die modification. The most satisfactory outcome is to accommodate the optimal parameters of the extrusion processes.},
isbn = {978-3-642-15973-2} 125 125 isbn = {978-3-642-15973-2}
} 126 126 }
127 127
@Inproceedings{10.1007/978-3-319-47096-2_11, 128 128 @Inproceedings{10.1007/978-3-319-47096-2_11,
author={Grace, Kazjon 129 129 author={Grace, Kazjon
and Maher, Mary Lou 130 130 and Maher, Mary Lou
and Wilson, David C. 131 131 and Wilson, David C.
and Najjar, Nadia A.}, 132 132 and Najjar, Nadia A.},
affiliation={University of North Carolina at Charlotte}, 133 133 affiliation={University of North Carolina at Charlotte},
editor={Goel, Ashok 134 134 editor={Goel, Ashok
and D{\'i}az-Agudo, M Bel{\'e}n 135 135 and D{\'i}az-Agudo, M Bel{\'e}n
and Roth-Berghofer, Thomas}, 136 136 and Roth-Berghofer, Thomas},
title={Combining CBR and Deep Learning to Generate Surprising Recipe Designs}, 137 137 title={Combining CBR and Deep Learning to Generate Surprising Recipe Designs},
keywords={Case-based reasoning, deep learning, recipe design}, 138 138 keywords={Case-based reasoning, deep learning, recipe design},
type = {article; proceedings paper}, 139 139 type = {article; proceedings paper},
booktitle={Case-Based Reasoning Research and Development}, 140 140 booktitle={Case-Based Reasoning Research and Development},
year={2016}, 141 141 year={2016},
publisher={Springer International Publishing}, 142 142 publisher={Springer International Publishing},
address={Cham}, 143 143 address={Cham},
language = {English}, 144 144 language = {English},
pages={154--169}, 145 145 pages={154--169},
abstract={This paper presents a dual-cycle CBR model in the domain of recipe generation. The model combines the strengths of deep learning and similarity-based retrieval to generate recipes that are novel and valuable (i.e. they are creative). The first cycle generates abstract descriptions which we call ``design concepts'' by synthesizing expectations from the entire case base, while the second cycle uses those concepts to retrieve and adapt objects. We define these conceptual object representations as an abstraction over complete cases on which expectations can be formed, allowing objects to be evaluated for surprisingness (the peak level of unexpectedness in the object, given the case base) and plausibility (the overall similarity of the object to those in the case base). The paper presents a prototype implementation of the model, and demonstrates its ability to generate objects that are simultaneously plausible and surprising, in addition to fitting a user query. This prototype is then compared to a traditional single-cycle CBR system.}, 146 146 abstract={This paper presents a dual-cycle CBR model in the domain of recipe generation. The model combines the strengths of deep learning and similarity-based retrieval to generate recipes that are novel and valuable (i.e. they are creative). The first cycle generates abstract descriptions which we call ``design concepts'' by synthesizing expectations from the entire case base, while the second cycle uses those concepts to retrieve and adapt objects. We define these conceptual object representations as an abstraction over complete cases on which expectations can be formed, allowing objects to be evaluated for surprisingness (the peak level of unexpectedness in the object, given the case base) and plausibility (the overall similarity of the object to those in the case base). 
The paper presents a prototype implementation of the model, and demonstrates its ability to generate objects that are simultaneously plausible and surprising, in addition to fitting a user query. This prototype is then compared to a traditional single-cycle CBR system.},
isbn={978-3-319-47096-2} 147 147 isbn={978-3-319-47096-2}
} 148 148 }
149 149
@Inproceedings{10.1007/978-3-319-61030-6_1, 150 150 @Inproceedings{10.1007/978-3-319-61030-6_1,
author={Maher, Mary Lou 151 151 author={Maher, Mary Lou
and Grace, Kazjon}, 152 152 and Grace, Kazjon},
editor={Aha, David W. 153 153 editor={Aha, David W.
and Lieber, Jean}, 154 154 and Lieber, Jean},
affiliation={University of North Carolina at Charlotte}, 155 155 affiliation={University of North Carolina at Charlotte},
title={Encouraging Curiosity in Case-Based Reasoning and Recommender Systems}, 156 156 title={Encouraging Curiosity in Case-Based Reasoning and Recommender Systems},
keywords={Curiosity, Case-based reasoning, Recommender systems}, 157 157 keywords={Curiosity, Case-based reasoning, Recommender systems},
booktitle={Case-Based Reasoning Research and Development}, 158 158 booktitle={Case-Based Reasoning Research and Development},
year={2017}, 159 159 year={2017},
publisher={Springer International Publishing}, 160 160 publisher={Springer International Publishing},
address={Cham}, 161 161 address={Cham},
pages={3--15}, 162 162 pages={3--15},
language = {English}, 163 163 language = {English},
type = {article; proceedings paper}, 164 164 type = {article; proceedings paper},
abstract={A key benefit of case-based reasoning (CBR) and recommender systems is the use of past experience to guide the synthesis or selection of the best solution for a specific context or user. Typically, the solution presented to the user is based on a value system that privileges the closest match in a query and the solution that performs best when evaluated according to predefined requirements. In domains in which creativity is desirable or the user is engaged in a learning activity, there is a benefit to moving beyond the expected or ``best match'' and including results based on computational models of novelty and surprise. In this paper, models of novelty and surprise are integrated with both CBR and Recommender Systems to encourage user curiosity.},
isbn={978-3-319-61030-6} 166 166 isbn={978-3-319-61030-6}
} 167 167 }
168 168
@Inproceedings{Muller, 169 169 @Inproceedings{Muller,
author = {Müller, G. and Bergmann, R.}, 170 170 author = {Müller, G. and Bergmann, R.},
affiliation={University of Trier}, 171 171 affiliation={University of Trier},
year = {2015}, 172 172 year = {2015},
month = {01}, 173 173 month = {01},
language = {English}, 174 174 language = {English},
type = {article; proceedings paper}, 175 175 type = {article; proceedings paper},
abstract = {This paper presents CookingCAKE, a framework for the adaptation of cooking recipes represented as workflows. CookingCAKE integrates and combines several workflow adaptation approaches applied in process-oriented case-based reasoning (POCBR) in a single adaptation framework, thus providing a capable tool for the adaptation of cooking recipes. The available case base of cooking workflows is analyzed to generate adaptation knowledge, which is used to adapt a recipe regarding restrictions and resources that the user may define for the preparation of a dish.},
booktitle = {International Conference on Case-Based Reasoning}, 177 177 booktitle = {International Conference on Case-Based Reasoning},
title = {CookingCAKE: A Framework for the Adaptation of Cooking Recipes Represented as Workflows},
keywords={recipe adaptation, workflow adaptation, workflows, process-oriented, case based reasoning} 179 179 keywords={recipe adaptation, workflow adaptation, workflows, process-oriented, case based reasoning}
} 180 180 }
181 181
@Inproceedings{10.1007/978-3-319-24586-7_20, 182 182 @Inproceedings{10.1007/978-3-319-24586-7_20,
author={Onta{\~{n}}{\'o}n, S. 183 183 author={Onta{\~{n}}{\'o}n, S.
and Plaza, E. 184 184 and Plaza, E.
and Zhu, J.}, 185 185 and Zhu, J.},
editor={H{\"u}llermeier, Eyke 186 186 editor={H{\"u}llermeier, Eyke
and Minor, Mirjam}, 187 187 and Minor, Mirjam},
affiliation={Drexel University; Artificial Intelligence Research Institute CSIC}, 188 188 affiliation={Drexel University; Artificial Intelligence Research Institute CSIC},
title={Argument-Based Case Revision in CBR for Story Generation}, 189 189 title={Argument-Based Case Revision in CBR for Story Generation},
keywords={CBR, Case-based reasoning, Story generation}, 190 190 keywords={CBR, Case-based reasoning, Story generation},
booktitle={Case-Based Reasoning Research and Development}, 191 191 booktitle={Case-Based Reasoning Research and Development},
year={2015}, 192 192 year={2015},
publisher={Springer International Publishing}, 193 193 publisher={Springer International Publishing},
address={Cham}, 194 194 address={Cham},
language = {English}, 195 195 language = {English},
pages={290--305}, 196 196 pages={290--305},
type = {article; proceedings paper}, 197 197 type = {article; proceedings paper},
abstract={This paper presents a new approach to case revision in case-based reasoning based on the idea of argumentation. Previous work on case reuse has proposed the use of operations such as case amalgamation (or merging), which generate solutions by combining information coming from different cases. Such approaches are often based on exploring the search space of possible combinations looking for a solution that maximizes a certain criterion. We show how Revise can be performed by arguments attacking specific parts of a case produced by Reuse, and how they can guide and prevent repeating pitfalls in future cases. The proposed approach is evaluated in the task of automatic story generation.},
isbn={978-3-319-24586-7} 199 199 isbn={978-3-319-24586-7}
} 200 200 }
201 201
@Inproceedings{10.1007/978-3-030-58342-2_20, 202 202 @Inproceedings{10.1007/978-3-030-58342-2_20,
author={Lepage, Yves 203 203 author={Lepage, Yves
and Lieber, Jean 204 204 and Lieber, Jean
and Mornard, Isabelle 205 205 and Mornard, Isabelle
and Nauer, Emmanuel 206 206 and Nauer, Emmanuel
and Romary, Julien 207 207 and Romary, Julien
and Sies, Reynault}, 208 208 and Sies, Reynault},
editor={Watson, Ian 209 209 editor={Watson, Ian
and Weber, Rosina}, 210 210 and Weber, Rosina},
title={The French Correction: When Retrieval Is Harder to Specify than Adaptation}, 211 211 title={The French Correction: When Retrieval Is Harder to Specify than Adaptation},
affiliation={Waseda University; Université de Lorraine}, 212 212 affiliation={Waseda University; Université de Lorraine},
keywords={case-based reasoning, retrieval, analogy, sentence correction}, 213 213 keywords={case-based reasoning, retrieval, analogy, sentence correction},
booktitle={Case-Based Reasoning Research and Development}, 214 214 booktitle={Case-Based Reasoning Research and Development},
year={2020}, 215 215 year={2020},
language = {English}, 216 216 language = {English},
type = {article; proceedings paper}, 217 217 type = {article; proceedings paper},
publisher={Springer International Publishing}, 218 218 publisher={Springer International Publishing},
address={Cham}, 219 219 address={Cham},
pages={309--324}, 220 220 pages={309--324},
abstract={A common idea in the field of case-based reasoning is that the retrieval step can be specified by the use of some similarity measure: the retrieved cases maximize the similarity to the target problem, and the adaptation step then has to take into account the mismatches between the retrieved cases and the target problem in order to solve the latter. The use of this methodological schema for the application described in this paper has proven to be inefficient. Indeed, designing a retrieval procedure without precise knowledge of the adaptation procedure has not been possible. The domain of this application is the correction of French sentences: a problem is an incorrect sentence and a valid solution is a correction of this problem. Adaptation consists in solving an analogical equation that enables the correction of the retrieved case to be executed on the target problem. Thus, retrieval has to ensure that this adaptation is feasible. The first version of such a retrieval procedure is described and evaluated: it is a knowledge-light procedure that does not use linguistic knowledge about French.},
Adaptation consists in solving an analogical equation that enables to execute the correction of the retrieved case on the target problem. Thus, retrieval has to ensure that this application is feasible. The first version of such a retrieval procedure is described and evaluated: it is a knowledge-light procedure that does not use linguistic knowledge about French.},
isbn={978-3-030-58342-2} 222 222 isbn={978-3-030-58342-2}
} 223 223 }
224 224
@Inproceedings{10.1007/978-3-030-01081-2_25,
author={Smyth, Barry
and Cunningham, P{\'a}draig},
editor={Cox, Michael T.
and Funk, Peter
and Begum, Shahina},
affiliation={University College Dublin},
title={An Analysis of Case Representations for Marathon Race Prediction and Planning},
keywords={Marathon planning, Case representation, Case-based reasoning},
booktitle={Case-Based Reasoning Research and Development},
year={2018},
language = {English},
publisher={Springer International Publishing},
address={Cham},
pages={369--384},
type = {article; proceedings paper},
abstract={We use case-based reasoning to help marathoners achieve a personal best for an upcoming race, by helping them to select an achievable goal-time and a suitable pacing plan. We evaluate several case representations and, using real-world race data, highlight their performance implications. Richer representations do not always deliver better prediction performance, but certain representational configurations do offer very significant practical benefits for runners, when it comes to predicting, and planning for, challenging goal-times during an upcoming race.},
isbn={978-3-030-01081-2}
}

@Inproceedings{10.1007/978-3-030-58342-2_8,
author={Smyth, Barry
and Willemsen, Martijn C.},
editor={Watson, Ian
and Weber, Rosina},
affiliation={University College Dublin; Eindhoven University of Technology},
title={Predicting the Personal-Best Times of Speed Skaters Using Case-Based Reasoning},
keywords={CBR for health and exercise, speed skating, race-time prediction, case representation},
booktitle={Case-Based Reasoning Research and Development},
year={2020},
type = {article; proceedings paper},
language = {English},
publisher={Springer International Publishing},
address={Cham},
pages={112--126},
abstract={Speed skating is a form of ice skating in which the skaters race each other over a variety of standardised distances. Races take place on specialised ice-rinks and the type of track and ice conditions can have a significant impact on race-times. As race distances increase, pacing also plays an important role. In this paper we seek to extend recent work on the application of case-based reasoning to marathon-time prediction by predicting race-times for speed skaters. In particular, we propose and evaluate a number of case-based reasoning variants based on different case and feature representations to generate track-specific race predictions. We show it is possible to improve upon state-of-the-art prediction accuracy by harnessing richer case representations using shorter races and track-adjusted finish and lap-times.},
isbn={978-3-030-58342-2}
}

@Inproceedings{10.1007/978-3-030-58342-2_5,
author={Feely, Ciara
and Caulfield, Brian
and Lawlor, Aonghus
and Smyth, Barry},
editor={Watson, Ian
and Weber, Rosina},
affiliation={University College Dublin},
title={Using Case-Based Reasoning to Predict Marathon Performance and Recommend Tailored Training Plans},
keywords={CBR for health and exercise, marathon running, race-time prediction, plan recommendation},
booktitle={Case-Based Reasoning Research and Development},
year={2020},
language = {English},
publisher={Springer International Publishing},
address={Cham},
pages={67--81},
type = {article; proceedings paper},
abstract={Training for the marathon, especially a first marathon, is always a challenge. Many runners struggle to find the right balance between their workouts and their recovery, often leading to sub-optimal performance on race-day or even injury during training. We describe and evaluate a novel case-based reasoning system to help marathon runners as they train in two ways. First, it uses a case-base of training/workouts and race histories to predict future marathon times for a target runner, throughout their training program, helping runners to calibrate their progress and, ultimately, plan their race-day pacing. Second, the system recommends tailored training plans to runners, adapted for their current goal-time target, and based on the training plans of similar runners who have achieved this time. We evaluate the system using a dataset of more than 21,000 unique runners and 1.5 million training/workout sessions.},
isbn={978-3-030-58342-2}
}

@article{LALITHA2020583,
title = {Personalised Self-Directed Learning Recommendation System},
journal = {Procedia Computer Science},
volume = {171},
pages = {583-592},
year = {2020},
type = {article},
language = {English},
note = {Third International Conference on Computing and Network Communications (CoCoNet'19)},
issn = {1877-0509},
doi = {https://doi.org/10.1016/j.procs.2020.04.063},
url = {https://www.sciencedirect.com/science/article/pii/S1877050920310309},
author = {T B Lalitha and P S Sreeja},
affiliation={Hindustan Institute of Technology and Science},
keywords = {e-Learning, PSDLR, Recommendation System, SDL, Self-Directed Learning},
abstract = {Modern educational systems have changed drastically bringing in knowledge anywhere as needed by the learner with the evolution of Internet. Availability of knowledge in public domain, capability of exchanging large amount of information and filtering relevant information quickly has enabled disruption to conventional educational system. Thus, future trends are looking towards E-Learning (Electronic Learning) and M-Learning (Mobile Learning) technologies over the Internet for their vast knowledge acquisition. In this paper, the work gives an elaborate context of learning strategies prevailing and emerging with the classification of e-learning Techniques. It majorly focuses on the features and variety of aspects with the e-learning and the choice of learning method involved and facilitate the adoption of new ways for personalized selection on learning resources for SDL (Self-Directed Learning) from the unstructured, large web-based environment. Thereby, proposes a Personalised Self-Directed Learning Recommendation System (PSDLR) based on the personal specifications of the SDL learner. The result offers insight into the perspectives and challenges of Self-Directed Learning based on cognitive and constructive characteristics which majorly incorporates web-based learning and gives path in finding appropriate solutions using machine learning techniques and ontology for the open problems in the respective fields with personalised recommendations and guidelines for future research.}
}

@article{Zhou2021,
author={Zhou, Lina
and Wang, Chunxia},
affiliation={Baotou Medical College},
title={Research on Recommendation of Personalized Exercises in English Learning Based on Data Mining},
journal={Scientific Programming},
year={2021},
month={Dec},
type = {article},
language = {English},
day={21},
publisher={Hindawi},
keywords={Recommender systems, Learning},
volume={2021},
pages={5042286},
abstract={Aiming at the problems of traditional method of exercise recommendation precision, recall rate, long recommendation time, and poor recommendation comprehensiveness, this study proposes a personalized exercise recommendation method for English learning based on data mining. Firstly, a personalized recommendation model is designed, based on the model to preprocess the data in the Web access log, and cleaning the noise data to avoid its impact on the accuracy of the recommendation results is focused; secondly, the DINA model to diagnose the degree of mastery of students' knowledge points is used and the students' browsing patterns through fuzzy similar relationships are clustered; and finally, according to the clustering results, the similarity between students and the similarity between exercises are measured, and the collaborative filtering recommendation of personalized exercises for English learning is realized. The experimental results show that the exercise recommendation precision and recall rate of this method are higher, the recommendation time is shorter, and the recommendation results are comprehensive.},
issn={1058-9244},
doi={10.1155/2021/5042286},
url={https://doi.org/10.1155/2021/5042286}
}

@article{INGKAVARA2022100086,
title = {The use of a personalized learning approach to implementing self-regulated online learning},
journal = {Computers and Education: Artificial Intelligence},
volume = {3},
pages = {100086},
type = {article},
language = {English},
year = {2022},
issn = {2666-920X},
doi = {https://doi.org/10.1016/j.caeai.2022.100086},
url = {https://www.sciencedirect.com/science/article/pii/S2666920X22000418},
author = {Thanyaluck Ingkavara and Patcharin Panjaburee and Niwat Srisawasdi and Suthiporn Sajjapanroj},
keywords = {Intelligent tutoring system, Personalization, Adaptive learning, E-learning, TAM, Artificial intelligence},
abstract = {Nowadays, students are encouraged to learn via online learning systems to promote students' autonomy. Scholars have found that students' self-regulated actions impact their academic success in an online learning environment. However, because traditional online learning systems cannot personalize feedback to the student's personality, most students have less chance to obtain helpful suggestions for enhancing their knowledge linked to their learning problems. This paper incorporated self-regulated online learning in the Physics classroom and used a personalized learning approach to help students receive proper learning paths and material corresponding to their learning preferences. This study conducted a quasi-experimental design using a quantitative approach to evaluate the effectiveness of the proposed learning environment in secondary schools. The experimental group of students participated in self-regulated online learning with a personalized learning approach, while the control group participated in conventional self-regulated online learning. The experimental results showed that the experimental group's post-test and the learning-gain score of the experimental group were significantly higher than those of the control group. Moreover, the results also suggested that the student's perceptions about the usefulness of learning suggestions, ease of use, goal setting, learning environmental structuring, task strategies, time management, self-evaluation, impact on learning, and attitude toward the learning environment are important predictors of behavioral intention to learn with the self-regulated online learning that integrated with the personalized learning approach.}
}

@article{HUANG2023104684,
title = {Effects of artificial Intelligence–Enabled personalized recommendations on learners’ learning engagement, motivation, and outcomes in a flipped classroom},
journal = {Computers and Education},
volume = {194},
pages = {104684},
year = {2023},
language = {English},
type = {article},
issn = {0360-1315},
doi = {https://doi.org/10.1016/j.compedu.2022.104684},
url = {https://www.sciencedirect.com/science/article/pii/S036013152200255X},
author = {Anna Y.Q. Huang and Owen H.T. Lu and Stephen J.H. Yang},
keywords = {Data science applications in education, Distance education and online learning, Improving classroom teaching},
abstract = {The flipped classroom approach is aimed at improving learning outcomes by promoting learning motivation and engagement. Recommendation systems can also be used to improve learning outcomes. With the rapid development of artificial intelligence (AI) technology, various systems have been developed to facilitate student learning. Accordingly, we applied AI-enabled personalized video recommendations to stimulate students' learning motivation and engagement during a systems programming course in a flipped classroom setting. We assigned students to control and experimental groups comprising 59 and 43 college students, respectively. The students in both groups received flipped classroom instruction, but only those in the experimental group received AI-enabled personalized video recommendations. We quantitatively measured students’ engagement based on their learning profiles in a learning management system. The results revealed that the AI-enabled personalized video recommendations could significantly improve the learning performance and engagement of students with a moderate motivation level.}
}

@article{ZHAO2023118535,
title = {A recommendation system for effective learning strategies: An integrated approach using context-dependent DEA},
journal = {Expert Systems with Applications},
volume = {211},
pages = {118535},
year = {2023},
language = {English},
type = {article},
issn = {0957-4174},
doi = {https://doi.org/10.1016/j.eswa.2022.118535},
url = {https://www.sciencedirect.com/science/article/pii/S0957417422016104},
author = {Lu-Tao Zhao and Dai-Song Wang and Feng-Yun Liang and Jian Chen},
keywords = {Recommendation system, Learning strategies, Context-dependent DEA, Efficiency analysis},
abstract = {Universities have been focusing on increasing individualized training and providing appropriate education for students. The individual differences and learning needs of college students should be given enough attention. From the perspective of learning efficiency, we establish a clustering hierarchical progressive improvement model (CHPI), which is based on cluster analysis and context-dependent data envelopment analysis (DEA) methods. The CHPI clusters students' ontological features, employs the context-dependent DEA method to stratify students of different classes, and calculates measures, such as obstacles, to determine the reference path for individuals with inefficient learning processes. The learning strategies are determined according to the gap between the inefficient individual to be improved and the individuals on the reference path. By the study of college English courses as an example, it is found that the CHPI can accurately recommend targeted learning strategies to satisfy the individual needs of college students so that the learning of individuals with inefficient learning processes in a certain stage can be effectively improved. In addition, CHPI can provide specific, efficient suggestions to improve learning efficiency comparing to existing recommendation systems, and has great potential in promoting the integration of education-related researches and expert systems.}
}

@article{SU2022109547,
title = {Graph-based cognitive diagnosis for intelligent tutoring systems},
journal = {Knowledge-Based Systems},
volume = {253},
pages = {109547},
year = {2022},
language = {English},
type = {article},
issn = {0950-7051},
doi = {https://doi.org/10.1016/j.knosys.2022.109547},
url = {https://www.sciencedirect.com/science/article/pii/S095070512200778X},
author = {Yu Su and Zeyu Cheng and Jinze Wu and Yanmin Dong and Zhenya Huang and Le Wu and Enhong Chen and Shijin Wang and Fei Xie},
keywords = {Cognitive diagnosis, Graph neural networks, Interpretable machine learning},
abstract = {For intelligent tutoring systems, Cognitive Diagnosis (CD) is a fundamental task that aims to estimate the mastery degree of a student on each skill according to the exercise record. The CD task is considered rather challenging since we need to model inner-relations and inter-relations among students, skills, and questions to obtain more abundant information. Most existing methods attempt to solve this problem through two-way interactions between students and questions (or between students and skills), ignoring potential high-order relations among entities. Furthermore, how to construct an end-to-end framework that can model the complex interactions among different types of entities at the same time remains unexplored. Therefore, in this paper, we propose a graph-based Cognitive Diagnosis model (GCDM) that directly discovers the interactions among students, skills, and questions through a heterogeneous cognitive graph. Specifically, we design two graph-based layers: a performance-relative propagator and an attentive knowledge aggregator. The former is applied to propagate a student’s cognitive state through different types of graph edges, while the latter selectively gathers messages from neighboring graph nodes. Extensive experimental results on two real-world datasets clearly show the effectiveness and extendibility of our proposed model.}
}

@article{EZALDEEN2022100700, 388 388 @article{EZALDEEN2022100700,
title = {A hybrid E-learning recommendation integrating adaptive profiling and sentiment analysis}, 389 389 title = {A hybrid E-learning recommendation integrating adaptive profiling and sentiment analysis},
journal = {Journal of Web Semantics}, 390 390 journal = {Journal of Web Semantics},
volume = {72}, 391 391 volume = {72},
pages = {100700}, 392 392 pages = {100700},
year = {2022}, 393 393 year = {2022},
type = {article}, 394 394 type = {article},
language = {English}, 395 395 language = {English},
issn = {1570-8268}, 396 396 issn = {1570-8268},
doi = {10.1016/j.websem.2021.100700},
url = {https://www.sciencedirect.com/science/article/pii/S1570826821000664}, 398 398 url = {https://www.sciencedirect.com/science/article/pii/S1570826821000664},
author = {Hadi Ezaldeen and Rachita Misra and Sukant Kishoro Bisoy and Rawaa Alatrash and Rojalina Priyadarshini}, 399 399 author = {Hadi Ezaldeen and Rachita Misra and Sukant Kishoro Bisoy and Rawaa Alatrash and Rojalina Priyadarshini},
keywords = {Hybrid E-learning recommendation, Adaptive profiling, Semantic learner profile, Fine-grained sentiment analysis, Convolutional Neural Network, Word embeddings}, 400 400 keywords = {Hybrid E-learning recommendation, Adaptive profiling, Semantic learner profile, Fine-grained sentiment analysis, Convolutional Neural Network, Word embeddings},
abstract = {This research proposes a novel framework named Enhanced e-Learning Hybrid Recommender System (ELHRS) that provides an appropriate e-content with the highest predicted ratings corresponding to the learner’s particular needs. To accomplish this, a new model is developed to deduce the Semantic Learner Profile automatically. It adaptively associates the learning patterns and rules depending on the learner’s behavior and the semantic relations computed in the semantic matrix that mutually links e-learning materials and terms. Here, a semantic-based approach for term expansion is introduced using DBpedia and WordNet ontologies. Further, various sentiment analysis models are proposed and incorporated as a part of the recommender system to predict ratings of e-learning resources from posted text reviews utilizing fine-grained sentiment classification on five discrete classes. Qualitative Natural Language Processing (NLP) methods with a tailor-made Convolutional Neural Network (CNN) are developed and evaluated on our customized dataset collected for a specific domain and a public dataset. Two improved language models are introduced depending on Skip-Gram (S-G) and Continuous Bag of Words (CBOW) techniques. In addition, a robust language model based on hybridization of these two methods is developed to derive better vocabulary representation, yielding a better accuracy of 89.1% for the CNN-Three-Channel-Concatenation model. The suggested recommendation methodology depends on the learner’s preferences, other similar learners’ experience and background, deriving their opinions from the reviews towards the best learning resources. This assists the learners in finding the desired e-content at the proper time.}
} 402 402 }
403 403
@article{MUANGPRATHUB2020e05227, 404 404 @article{MUANGPRATHUB2020e05227,
title = {Learning recommendation with formal concept analysis for intelligent tutoring system}, 405 405 title = {Learning recommendation with formal concept analysis for intelligent tutoring system},
journal = {Heliyon}, 406 406 journal = {Heliyon},
volume = {6}, 407 407 volume = {6},
number = {10}, 408 408 number = {10},
pages = {e05227}, 409 409 pages = {e05227},
language = {English}, 410 410 language = {English},
type = {article}, 411 411 type = {article},
year = {2020}, 412 412 year = {2020},
issn = {2405-8440}, 413 413 issn = {2405-8440},
doi = {10.1016/j.heliyon.2020.e05227},
url = {https://www.sciencedirect.com/science/article/pii/S2405844020320703}, 415 415 url = {https://www.sciencedirect.com/science/article/pii/S2405844020320703},
author = {Jirapond Muangprathub and Veera Boonjing and Kosin Chamnongthai}, 416 416 author = {Jirapond Muangprathub and Veera Boonjing and Kosin Chamnongthai},
keywords = {Computer Science, Learning recommendation, Formal concept analysis, Intelligent tutoring system, Adaptive learning}, 417 417 keywords = {Computer Science, Learning recommendation, Formal concept analysis, Intelligent tutoring system, Adaptive learning},
abstract = {The aim of this research was to develop a learning recommendation component in an intelligent tutoring system (ITS) that dynamically predicts and adapts to a learner's style. In order to develop a proper ITS, we present an improved knowledge base supporting adaptive learning, which can be achieved by a suitable knowledge construction. This process is illustrated by implementing a web-based online tutor system. In addition, our knowledge structure provides adaptive presentation and personalized learning with the proposed adaptive algorithm, to retrieve content according to individual learner characteristics. To demonstrate the proposed adaptive algorithm, pre-test and post-test were used to evaluate suggestion accuracy of the course in a class for adapting to a learner's style. In addition, pre- and post-testing were also used with students in a real teaching/learning environment to evaluate the performance of the proposed model. The results show that the proposed system can be used to help students or learners achieve improved learning.}
} 419 419 }
420 420
@article{min8100434, 421 421 @article{min8100434,
author = {Leikola, Maria and Sauer, Christian and Rintala, Lotta and Aromaa, Jari and Lundström, Mari}, 422 422 author = {Leikola, Maria and Sauer, Christian and Rintala, Lotta and Aromaa, Jari and Lundström, Mari},
title = {Assessing the Similarity of Cyanide-Free Gold Leaching Processes: A Case-Based Reasoning Application}, 423 423 title = {Assessing the Similarity of Cyanide-Free Gold Leaching Processes: A Case-Based Reasoning Application},
journal = {Minerals}, 424 424 journal = {Minerals},
volume = {8}, 425 425 volume = {8},
type = {article}, 426 426 type = {article},
language = {English}, 427 427 language = {English},
year = {2018}, 428 428 year = {2018},
number = {10}, 429 429 number = {10},
url = {https://www.mdpi.com/2075-163X/8/10/434}, 430 430 url = {https://www.mdpi.com/2075-163X/8/10/434},
issn = {2075-163X}, 431 431 issn = {2075-163X},
keywords={hydrometallurgy, cyanide-free gold, knowledge modelling, case-based reasoning, information retrieval}, 432 432 keywords={hydrometallurgy, cyanide-free gold, knowledge modelling, case-based reasoning, information retrieval},
abstract = {Hydrometallurgical researchers, and other professionals alike, invest significant amounts of time reading scientific articles, technical notes, and other scientific documents, while looking for the most relevant information for their particular research interest. In an attempt to save the researcher's time, this study presents an information retrieval tool using case-based reasoning. The tool was built for comparing scientific articles concerning cyanide-free leaching of gold ores/concentrates/tailings. Altogether, 50 cases of experiments were gathered in a case base. 15 different attributes related to the treatment of the raw material and the leaching conditions were selected to compare the cases. The attributes were as follows: Pretreatment, Overall method, Complexant source, Oxidant source, Complexant concentration, Oxidant concentration, Temperature, pH, Redox-potential, Pressure, Materials of construction, Extraction, Extraction rate, Reagent consumption, and Solid-liquid ratio. The resulting retrieval tool (LeachSim) was able to rank the scientific articles according to their similarity with the user's research interest. Such a tool could eventually aid the user in finding the most relevant information, but not replace thorough understanding and human expertise.},
doi = {10.3390/min8100434} 434 434 doi = {10.3390/min8100434}
} 435 435 }
436 436
@article{10.1145/3459665, 437 437 @article{10.1145/3459665,
author = {Cunningham, P\'{a}draig and Delany, Sarah Jane}, 438 438 author = {Cunningham, P\'{a}draig and Delany, Sarah Jane},
title = {K-Nearest Neighbour Classifiers - A Tutorial}, 439 439 title = {K-Nearest Neighbour Classifiers - A Tutorial},
year = {2021}, 440 440 year = {2021},
issue_date = {July 2022}, 441 441 issue_date = {July 2022},
publisher = {Association for Computing Machinery}, 442 442 publisher = {Association for Computing Machinery},
address = {New York, NY, USA}, 443 443 address = {New York, NY, USA},
type={article}, 444 444 type={article},
language={English}, 445 445 language={English},
volume = {54}, 446 446 volume = {54},
number = {6}, 447 447 number = {6},
issn = {0360-0300}, 448 448 issn = {0360-0300},
url = {https://doi.org/10.1145/3459665}, 449 449 url = {https://doi.org/10.1145/3459665},
doi = {10.1145/3459665}, 450 450 doi = {10.1145/3459665},
abstract = {Perhaps the most straightforward classifier in the arsenal of Machine Learning techniques is the Nearest Neighbour Classifier—classification is achieved by identifying the nearest neighbours to a query example and using those neighbours to determine the class of the query. This approach to classification is of particular importance because issues of poor runtime performance are not such a problem these days with the computational power that is available. This article presents an overview of techniques for Nearest Neighbour classification focusing on: mechanisms for assessing similarity (distance), computational issues in identifying nearest neighbours, and mechanisms for reducing the dimension of the data. This article is the second edition of a paper previously published as a technical report [16]. Sections on similarity measures for time-series, retrieval speedup, and intrinsic dimensionality have been added. An Appendix is included, providing access to Python code for the key methods.},
journal = {ACM Comput. Surv.}, 452 452 journal = {ACM Comput. Surv.},
month = {jul}, 453 453 month = {jul},
articleno = {128}, 454 454 articleno = {128},
numpages = {25}, 455 455 numpages = {25},
keywords = {k-Nearest neighbour classifiers} 456 456 keywords = {k-Nearest neighbour classifiers}
} 457 457 }
458 458
@article{9072123, 459 459 @article{9072123,
author={Sinaga, Kristina P. and Yang, Miin-Shen}, 460 460 author={Sinaga, Kristina P. and Yang, Miin-Shen},
journal={IEEE Access}, 461 461 journal={IEEE Access},
type={article}, 462 462 type={article},
language={English}, 463 463 language={English},
title={Unsupervised K-Means Clustering Algorithm}, 464 464 title={Unsupervised K-Means Clustering Algorithm},
year={2020}, 465 465 year={2020},
volume={8}, 466 466 volume={8},
number={}, 467 467 number={},
pages={80716-80727}, 468 468 pages={80716-80727},
doi={10.1109/ACCESS.2020.2988796} 469 469 doi={10.1109/ACCESS.2020.2988796}
} 470 470 }
471 471
@article{WANG2021331, 472 472 @article{WANG2021331,
title = {A new prediction strategy for dynamic multi-objective optimization using Gaussian Mixture Model}, 473 473 title = {A new prediction strategy for dynamic multi-objective optimization using Gaussian Mixture Model},
journal = {Information Sciences}, 474 474 journal = {Information Sciences},
volume = {580}, 475 475 volume = {580},
type = {article}, 476 476 type = {article},
language = {English}, 477 477 language = {English},
pages = {331-351}, 478 478 pages = {331-351},
year = {2021}, 479 479 year = {2021},
issn = {0020-0255}, 480 480 issn = {0020-0255},
doi = {10.1016/j.ins.2021.08.065},
url = {https://www.sciencedirect.com/science/article/pii/S0020025521008732}, 482 482 url = {https://www.sciencedirect.com/science/article/pii/S0020025521008732},
author = {Feng Wang and Fanshu Liao and Yixuan Li and Hui Wang}, 483 483 author = {Feng Wang and Fanshu Liao and Yixuan Li and Hui Wang},
keywords = {Dynamic multi-objective optimization, Gaussian Mixture Model, Change type detection, Resampling}, 484 484 keywords = {Dynamic multi-objective optimization, Gaussian Mixture Model, Change type detection, Resampling},
abstract = {Dynamic multi-objective optimization problems (DMOPs), in which the environments change over time, have attracted many researchers’ attention in recent years. Since the Pareto set (PS) or the Pareto front (PF) can change over time, how to track the movement of the PS or PF is a challenging problem in DMOPs. Over the past few years, lots of methods have been proposed, and the prediction based strategy has been considered the most effective way to track the new PS. However, the performance of most existing prediction strategies depends greatly on the quantity and quality of the historical information and will deteriorate due to non-linear changes, leading to poor results. In this paper, we propose a new prediction method, named MOEA/D-GMM, which incorporates the Gaussian Mixture Model (GMM) into the MOEA/D framework for the prediction of the new PS when changes occur. Since GMM is a powerful non-linear model to accurately fit various data distributions, it can effectively generate solutions with better quality according to the distributions. In the proposed algorithm, a change type detection strategy is first designed to estimate an approximate PS according to different change types. Then, GMM is employed to make a more accurate prediction by training it with the approximate PS. To overcome the shortcoming of a lack of training solutions for GMM, the Empirical Cumulative Distribution Function (ECDF) method is used to resample more training solutions before GMM training. Experimental results on various benchmark test problems and a classical real-world problem show that, compared with some state-of-the-art dynamic optimization algorithms, MOEA/D-GMM outperforms others in most cases.}
} 486 486 }
487 487
@article{9627973, 488 488 @article{9627973,
author={Xu, Shengbing and Cai, Wei and Xia, Hongxi and Liu, Bo and Xu, Jie}, 489 489 author={Xu, Shengbing and Cai, Wei and Xia, Hongxi and Liu, Bo and Xu, Jie},
journal={IEEE Access}, 490 490 journal={IEEE Access},
title={Dynamic Metric Accelerated Method for Fuzzy Clustering}, 491 491 title={Dynamic Metric Accelerated Method for Fuzzy Clustering},
year={2021}, 492 492 year={2021},
type={article}, 493 493 type={article},
language={English}, 494 494 language={English},
volume={9}, 495 495 volume={9},
number={}, 496 496 number={},
pages={166838-166854}, 497 497 pages={166838-166854},
doi={10.1109/ACCESS.2021.3131368} 498 498 doi={10.1109/ACCESS.2021.3131368}
} 499 499 }
500 500
@article{9434422, 501 501 @article{9434422,
author={Gupta, Samarth and Chaudhari, Shreyas and Joshi, Gauri and Yağan, Osman}, 502 502 author={Gupta, Samarth and Chaudhari, Shreyas and Joshi, Gauri and Yağan, Osman},
journal={IEEE Transactions on Information Theory}, 503 503 journal={IEEE Transactions on Information Theory},
title={Multi-Armed Bandits With Correlated Arms}, 504 504 title={Multi-Armed Bandits With Correlated Arms},
year={2021}, 505 505 year={2021},
language={English}, 506 506 language={English},
type={article}, 507 507 type={article},
volume={67}, 508 508 volume={67},
number={10}, 509 509 number={10},
pages={6711-6732}, 510 510 pages={6711-6732},
doi={10.1109/TIT.2021.3081508} 511 511 doi={10.1109/TIT.2021.3081508}
} 512 512 }
513 513
@Inproceedings{8495930, 514 514 @Inproceedings{8495930,
author={Supic, H.}, 515 515 author={Supic, H.},
booktitle={2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE)}, 516 516 booktitle={2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE)},
title={Case-Based Reasoning Model for Personalized Learning Path Recommendation in Example-Based Learning Activities}, 517 517 title={Case-Based Reasoning Model for Personalized Learning Path Recommendation in Example-Based Learning Activities},
year={2018}, 518 518 year={2018},
type={inproceedings},
language={English}, 520 520 language={English},
volume={}, 521 521 volume={},
number={}, 522 522 number={},
pages={175-178}, 523 523 pages={175-178},
doi={10.1109/WETICE.2018.00040} 524 524 doi={10.1109/WETICE.2018.00040}
} 525 525 }
526 526
@Inproceedings{9870279, 527 527 @Inproceedings{9870279,
author={Lin, Baihan}, 528 528 author={Lin, Baihan},
booktitle={2022 IEEE Congress on Evolutionary Computation (CEC)}, 529 529 booktitle={2022 IEEE Congress on Evolutionary Computation (CEC)},
title={Evolutionary Multi-Armed Bandits with Genetic Thompson Sampling}, 530 530 title={Evolutionary Multi-Armed Bandits with Genetic Thompson Sampling},
year={2022}, 531 531 year={2022},
type={inproceedings},
language={English}, 533 533 language={English},
volume={}, 534 534 volume={},
number={}, 535 535 number={},
pages={1-8}, 536 536 pages={1-8},
doi={10.1109/CEC55065.2022.9870279} 537 537 doi={10.1109/CEC55065.2022.9870279}
} 538 538 }
539 539
@article{Obeid, 540 540 @article{Obeid,
author={Obeid, C. and Lahoud, C. and Khoury, H. E. and Champin, P.}, 541 541 author={Obeid, C. and Lahoud, C. and Khoury, H. E. and Champin, P.},
title={A Novel Hybrid Recommender System Approach for Student Academic Advising Named COHRS, Supported by Case-based Reasoning and Ontology}, 542 542 title={A Novel Hybrid Recommender System Approach for Student Academic Advising Named COHRS, Supported by Case-based Reasoning and Ontology},
journal={Computer Science and Information Systems}, 543 543 journal={Computer Science and Information Systems},
type={article}, 544 544 type={article},
language={English}, 545 545 language={English},
volume={19}, 546 546 volume={19},
number={2}, 547 547 number={2},
pages={979-1005},
year={2022}, 549 549 year={2022},
doi={10.2298/CSIS220215011O}
} 551 551 }
552 552
@book{Nkambou, 553 553 @book{Nkambou,
author = {Nkambou, R. and Bourdeau, J. and Mizoguchi, R.}, 554 554 author = {Nkambou, R. and Bourdeau, J. and Mizoguchi, R.},
title = {Advances in Intelligent Tutoring Systems}, 555 555 title = {Advances in Intelligent Tutoring Systems},
year = {2010}, 556 556 year = {2010},
type = {book},
language = {English}, 558 558 language = {English},
publisher = {Springer Berlin, Heidelberg}, 559 559 publisher = {Springer Berlin, Heidelberg},
edition = {1} 560 560 edition = {1}
} 561 561 }
562 562
@book{hajduk2019cognitive, 563 563 @book{hajduk2019cognitive,
title={Cognitive Multi-agent Systems: Structures, Strategies and Applications to Mobile Robotics and Robosoccer}, 564 564 title={Cognitive Multi-agent Systems: Structures, Strategies and Applications to Mobile Robotics and Robosoccer},
author={Hajduk, M. and Sukop, M. and Haun, M.}, 565 565 author={Hajduk, M. and Sukop, M. and Haun, M.},
type={book}, 566 566 type={book},
language={English}, 567 567 language={English},
isbn={9783319936857}, 568 568 isbn={9783319936857},
series={Studies in Systems, Decision and Control}, 569 569 series={Studies in Systems, Decision and Control},
year={2019}, 570 570 year={2019},
publisher={Springer International Publishing} 571 571 publisher={Springer International Publishing}
} 572 572 }
573 573
@article{RICHTER20093, 574 574 @article{RICHTER20093,
title = {The search for knowledge, contexts, and Case-Based Reasoning}, 575 575 title = {The search for knowledge, contexts, and Case-Based Reasoning},
journal = {Engineering Applications of Artificial Intelligence}, 576 576 journal = {Engineering Applications of Artificial Intelligence},
language = {English}, 577 577 language = {English},
type = {article}, 578 578 type = {article},
volume = {22}, 579 579 volume = {22},
number = {1}, 580 580 number = {1},
pages = {3-9}, 581 581 pages = {3-9},
year = {2009}, 582 582 year = {2009},
issn = {0952-1976}, 583 583 issn = {0952-1976},
doi = {10.1016/j.engappai.2008.04.021},
url = {https://www.sciencedirect.com/science/article/pii/S095219760800078X}, 585 585 url = {https://www.sciencedirect.com/science/article/pii/S095219760800078X},
author = {Michael M. Richter}, 586 586 author = {Michael M. Richter},
keywords = {Case-Based Reasoning, Knowledge, Processes, Utility, Context}, 587 587 keywords = {Case-Based Reasoning, Knowledge, Processes, Utility, Context},
abstract = {A major goal of this paper is to compare Case-Based Reasoning with other methods searching for knowledge. We consider knowledge as a resource that can be traded. It has no value in itself; the value is measured by the usefulness of applying it in some process. Such a process has info-needs that have to be satisfied. The concept to measure this is the economical term utility. In general, utility depends on the user and its context, i.e., it is subjective. Here, we introduce levels of contexts from general to individual. We illustrate that on the lower, i.e., more personal, levels of context Case-Based Reasoning is quite useful, in particular in comparison with traditional information retrieval methods.}
} 589 589 }
590 590
@Thesis{Marie, 591 591 @Thesis{Marie,
author={Marie, F.}, 592 592 author={Marie, F.},
title={COLISEUM-3D. Une plate-forme innovante pour la segmentation d’images médicales par Raisonnement à Partir de Cas (RàPC) et méthodes d’apprentissage de type Deep Learning}, 593 593 title={COLISEUM-3D. Une plate-forme innovante pour la segmentation d’images médicales par Raisonnement à Partir de Cas (RàPC) et méthodes d’apprentissage de type Deep Learning},
type={diplomathesis}, 594 594 type={diplomathesis},
language={French}, 595 595 language={French},
institution={Université de Franche-Comté},
year={2019} 597 597 year={2019}
} 598 598 }
599 599
@book{Hoang, 600 600 @book{Hoang,
title = {La formule du savoir. Une philosophie unifiée du savoir fondée sur le théorème de Bayes}, 601 601 title = {La formule du savoir. Une philosophie unifiée du savoir fondée sur le théorème de Bayes},
author = {Hoang, L.N.}, 602 602 author = {Hoang, L.N.},
type = {book}, 603 603 type = {book},
language = {French}, 604 604 language = {French},
isbn = {9782759822607}, 605 605 isbn = {9782759822607},
year = {2018}, 606 606 year = {2018},
publisher = {EDP Sciences} 607 607 publisher = {EDP Sciences}
} 608 608 }
609 609
@book{Richter2013, 610 610 @book{Richter2013,
title={Case-Based Reasoning (A Textbook)}, 611 611 title={Case-Based Reasoning (A Textbook)},
author={Richter, M. and Weber, R.}, 612 612 author={Richter, M. and Weber, R.},
type={book}, 613 613 type={book},
language={English}, 614 614 language={English},
isbn={9783642401664}, 615 615 isbn={9783642401664},
year={2013}, 616 616 year={2013},
publisher={Springer-Verlag GmbH} 617 617 publisher={Springer-Verlag GmbH}
} 618 618 }
619 619
@book{kedia2020hands, 620 620 @book{kedia2020hands,
title={Hands-On Python Natural Language Processing: Explore tools and techniques to analyze and process text with a view to building real-world NLP applications}, 621 621 title={Hands-On Python Natural Language Processing: Explore tools and techniques to analyze and process text with a view to building real-world NLP applications},
author={Kedia, A. and Rasu, M.}, 622 622 author={Kedia, A. and Rasu, M.},
language={English}, 623 623 language={English},
type={book}, 624 624 type={book},
isbn={9781838982584}, 625 625 isbn={9781838982584},
url={https://books.google.fr/books?id=1AbuDwAAQBAJ}, 626 626 url={https://books.google.fr/books?id=1AbuDwAAQBAJ},
year={2020}, 627 627 year={2020},
publisher={Packt Publishing} 628 628 publisher={Packt Publishing}
} 629 629 }
630 630
@book{ghosh2019natural, 631 631 @book{ghosh2019natural,
title={Natural Language Processing Fundamentals: Build intelligent applications that can interpret the human language to deliver impactful results}, 632 632 title={Natural Language Processing Fundamentals: Build intelligent applications that can interpret the human language to deliver impactful results},
author={Ghosh, S. and Gunning, D.}, 633 633 author={Ghosh, S. and Gunning, D.},
language={English}, 634 634 language={English},
type={book}, 635 635 type={book},
isbn={9781789955989}, 636 636 isbn={9781789955989},
url={https://books.google.fr/books?id=i8-PDwAAQBAJ}, 637 637 url={https://books.google.fr/books?id=i8-PDwAAQBAJ},
year={2019}, 638 638 year={2019},
publisher={Packt Publishing} 639 639 publisher={Packt Publishing}
} 640 640 }
641 641
@article{Akerblom, 642 642 @article{Akerblom,
title={Online learning of network bottlenecks via minimax paths}, 643 643 title={Online learning of network bottlenecks via minimax paths},
author={{\AA}kerblom, Niklas and Hoseini, Fazeleh Sadat and Haghir Chehreghani, Morteza},
language={English}, 645 645 language={English},
type={article}, 646 646 type={article},
volume = {122}, 647 647 volume = {122},
year = {2023}, 648 648 year = {2023},
issn = {1573-0565}, 649 649 issn = {1573-0565},
doi = {10.1007/s10994-022-06270-0},
url = {https://doi.org/10.1007/s10994-022-06270-0}, 651 651 url = {https://doi.org/10.1007/s10994-022-06270-0},
abstract={In this paper, we study bottleneck identification in networks via extracting minimax paths. Many real-world networks have stochastic weights for which full knowledge is not available in advance. Therefore, we model this task as a combinatorial semi-bandit problem to which we apply a combinatorial version of Thompson Sampling and establish an upper bound on the corresponding Bayesian regret. Due to the computational intractability of the problem, we then devise an alternative problem formulation which approximates the original objective. Finally, we experimentally evaluate the performance of Thompson Sampling with the approximate formulation on real-world directed and undirected networks.} 652 652 abstract={In this paper, we study bottleneck identification in networks via extracting minimax paths. Many real-world networks have stochastic weights for which full knowledge is not available in advance. Therefore, we model this task as a combinatorial semi-bandit problem to which we apply a combinatorial version of Thompson Sampling and establish an upper bound on the corresponding Bayesian regret. Due to the computational intractability of the problem, we then devise an alternative problem formulation which approximates the original objective. Finally, we experimentally evaluate the performance of Thompson Sampling with the approximate formulation on real-world directed and undirected networks.}
} 653 653 }
654 654
@article{Simen, 655 655 @article{Simen,
title={Dynamic slate recommendation with gated recurrent units and Thompson sampling}, 656 656 title={Dynamic slate recommendation with gated recurrent units and Thompson sampling},
author={Eide, Simen and Leslie, David S. and Frigessi, Arnoldo}, 657 657 author={Eide, Simen and Leslie, David S. and Frigessi, Arnoldo},
language={English}, 658 658 language={English},
type={article}, 659 659 type={article},
volume = {36}, 660 660 volume = {36},
year = {2022}, 661 661 year = {2022},
issn = {1573-756X}, 662 662 issn = {1573-756X},
doi = {10.1007/s10618-022-00849-w},
url = {https://doi.org/10.1007/s10618-022-00849-w}, 664 664 url = {https://doi.org/10.1007/s10618-022-00849-w},
abstract={We consider the problem of recommending relevant content to users of an internet platform in the form of lists of items, called slates. We introduce a variational Bayesian Recurrent Neural Net recommender system that acts on time series of interactions between the internet platform and the user, and which scales to real world industrial situations. The recommender system is tested both online on real users, and on an offline dataset collected from a Norwegian web-based marketplace, FINN.no, that is made public for research. This is one of the first publicly available datasets which includes all the slates that are presented to users as well as which items (if any) in the slates were clicked on. Such a data set allows us to move beyond the common assumption that implicitly assumes that users are considering all possible items at each interaction. Instead we build our likelihood using the items that are actually in the slate, and evaluate the strengths and weaknesses of both approaches theoretically and in experiments. We also introduce a hierarchical prior for the item parameters based on group memberships. Both item parameters and user preferences are learned probabilistically. Furthermore, we combine our model with bandit strategies to ensure learning, and introduce ‘in-slate Thompson sampling’ which makes use of the slates to maximise explorative opportunities. We show experimentally that explorative recommender strategies perform on par or above their greedy counterparts. Even without making use of exploration to learn more effectively, click rates increase simply because of improved diversity in the recommended slates.} 665 665 abstract={We consider the problem of recommending relevant content to users of an internet platform in the form of lists of items, called slates. 
We introduce a variational Bayesian Recurrent Neural Net recommender system that acts on time series of interactions between the internet platform and the user, and which scales to real world industrial situations. The recommender system is tested both online on real users, and on an offline dataset collected from a Norwegian web-based marketplace, FINN.no, that is made public for research. This is one of the first publicly available datasets which includes all the slates that are presented to users as well as which items (if any) in the slates were clicked on. Such a data set allows us to move beyond the common assumption that implicitly assumes that users are considering all possible items at each interaction. Instead we build our likelihood using the items that are actually in the slate, and evaluate the strengths and weaknesses of both approaches theoretically and in experiments. We also introduce a hierarchical prior for the item parameters based on group memberships. Both item parameters and user preferences are learned probabilistically. Furthermore, we combine our model with bandit strategies to ensure learning, and introduce ‘in-slate Thompson sampling’ which makes use of the slates to maximise explorative opportunities. We show experimentally that explorative recommender strategies perform on par or above their greedy counterparts. Even without making use of exploration to learn more effectively, click rates increase simply because of improved diversity in the recommended slates.}
} 666 666 }
667 667
@Inproceedings{Arthurs, 668 668 @Inproceedings{Arthurs,
author={Arthurs, Noah and Stenhaug, Ben and Karayev, Sergey and Piech, Chris}, 669 669 author={Arthurs, Noah and Stenhaug, Ben and Karayev, Sergey and Piech, Chris},
booktitle={International Conference on Educational Data Mining (EDM)}, 670 670 booktitle={International Conference on Educational Data Mining (EDM)},
title={Grades Are Not Normal: Improving Exam Score Models Using the Logit-Normal Distribution}, 671 671 title={Grades Are Not Normal: Improving Exam Score Models Using the Logit-Normal Distribution},
year={2019}, 672 672 year={2019},
type={inproceedings},
language={English}, 674 674 language={English},
volume={}, 675 675 volume={},
number={}, 676 676 number={},
pages={6}, 677 677 pages={6},
url={https://eric.ed.gov/?id=ED599204} 678 678 url={https://eric.ed.gov/?id=ED599204}
} 679 679 }
680 680
@article{Bahramian, 681 681 @article{Bahramian,
title={A Cold Start Context-Aware Recommender System for Tour Planning Using Artificial Neural Network and Case Based Reasoning}, 682 682 title={A Cold Start Context-Aware Recommender System for Tour Planning Using Artificial Neural Network and Case Based Reasoning},
author={Bahramian, Zahra and Ali Abbaspour, Rahim and Claramunt, Christophe}, 683 683 author={Bahramian, Zahra and Ali Abbaspour, Rahim and Claramunt, Christophe},
language={English}, 684 684 language={English},
type={article}, 685 685 type={article},
year = {2017}, 686 686 year = {2017},
issn = {1574-017X}, 687 687 issn = {1574-017X},
doi = {10.1155/2017/9364903},
url = {https://doi.org/10.1155/2017/9364903}, 689 689 url = {https://doi.org/10.1155/2017/9364903},
abstract={Nowadays, large amounts of tourism information and services are available over the Web. This makes it difficult for the user to search for some specific information such as selecting a tour in a given city as an ordered set of points of interest. Moreover, the user rarely knows all his needs upfront and his preferences may change during a recommendation process. The user may also have a limited number of initial ratings and most often the recommender system is likely to face the well-known cold start problem. The objective of the research presented in this paper is to introduce a hybrid interactive context-aware tourism recommender system that takes into account user's feedbacks and additional contextual information. It offers personalized tours to the user based on his preferences thanks to the combination of a case based reasoning framework and an artificial neural network. The proposed method has been tried in the city of Tehran in Iran. The results show that the proposed method outperforms current artificial neural network methods and combinations of case based reasoning with k-nearest neighbor methods in terms of user effort, accuracy, and user satisfaction.}
} 691 691 }
692 692
@Thesis{Daubias2011, 693 693 @Thesis{Daubias2011,
author={Stéphanie Jean-Daubias},
title={Ingénierie des profils d'apprenants}, 695 695 title={Ingénierie des profils d'apprenants},
type={diplomathesis}, 696 696 type={diplomathesis},
language={French}, 697 697 language={French},
institution={Université Claude Bernard Lyon 1}, 698 698 institution={Université Claude Bernard Lyon 1},
year={2011} 699 699 year={2011}
} 700 700 }
701 701
@article{Tapalova, 702 702 @article{Tapalova,
author = {Olga Tapalova and Nadezhda Zhiyenbayeva}, 703 703 author = {Olga Tapalova and Nadezhda Zhiyenbayeva},
title ={Artificial Intelligence in Education: AIEd for Personalised Learning Pathways}, 704 704 title ={Artificial Intelligence in Education: AIEd for Personalised Learning Pathways},
journal = {Electronic Journal of e-Learning}, 705 705 journal = {Electronic Journal of e-Learning},
volume = {}, 706 706 volume = {},
number = {}, 707 707 number = {},
pages = {15}, 708 708 pages = {15},
year = {2022}, 709 709 year = {2022},
URL = {https://eric.ed.gov/?q=Artificial+Intelligence+in+Education%3a+AIEd+for+Personalised+Learning+Pathways&id=EJ1373006}, 710 710 URL = {https://eric.ed.gov/?q=Artificial+Intelligence+in+Education%3a+AIEd+for+Personalised+Learning+Pathways&id=EJ1373006},
language={English}, 711 711 language={English},
type={article}, 712 712 type={article},
abstract = {Artificial intelligence is the driving force of change focusing on the needs and demands of the student. The research explores Artificial Intelligence in Education (AIEd) for building personalised learning systems for students. The research investigates and proposes a framework for AIEd: social networking sites and chatbots, expert systems for education, intelligent mentors and agents, machine learning, personalised educational systems and virtual educational environments. These technologies help educators to develop and introduce personalised approaches to master new knowledge and develop professional competencies. The research presents a case study of AIEd implementation in education. The scholars conducted the experiment in educational establishments using artificial intelligence in the curriculum. The scholars surveyed 184 second-year students of the Institute of Pedagogy and Psychology at the Abay Kazakh National Pedagogical University and the Kuban State Technological University to collect the data. The scholars considered the collective group discussions regarding the application of artificial intelligence in education to improve the effectiveness of learning. The research identified key advantages to creating personalised learning pathways such as access to training in 24/7 mode, training in virtual contexts, adaptation of educational content to personal needs of students, real-time and regular feedback, improvements in the educational process and mental stimulations. The proposed education paradigm reflects the increasing role of artificial intelligence in socio-economic life, the social and ethical concerns artificial intelligence may pose to humanity and its role in the digitalisation of education. 
The current article may be used as a theoretical framework for many educational institutions planning to exploit the capabilities of artificial intelligence in their adaptation to personalized learning.} 713 713 abstract = {Artificial intelligence is the driving force of change focusing on the needs and demands of the student. The research explores Artificial Intelligence in Education (AIEd) for building personalised learning systems for students. The research investigates and proposes a framework for AIEd: social networking sites and chatbots, expert systems for education, intelligent mentors and agents, machine learning, personalised educational systems and virtual educational environments. These technologies help educators to develop and introduce personalised approaches to master new knowledge and develop professional competencies. The research presents a case study of AIEd implementation in education. The scholars conducted the experiment in educational establishments using artificial intelligence in the curriculum. The scholars surveyed 184 second-year students of the Institute of Pedagogy and Psychology at the Abay Kazakh National Pedagogical University and the Kuban State Technological University to collect the data. The scholars considered the collective group discussions regarding the application of artificial intelligence in education to improve the effectiveness of learning. The research identified key advantages to creating personalised learning pathways such as access to training in 24/7 mode, training in virtual contexts, adaptation of educational content to personal needs of students, real-time and regular feedback, improvements in the educational process and mental stimulations. The proposed education paradigm reflects the increasing role of artificial intelligence in socio-economic life, the social and ethical concerns artificial intelligence may pose to humanity and its role in the digitalisation of education. 
The current article may be used as a theoretical framework for many educational institutions planning to exploit the capabilities of artificial intelligence in their adaptation to personalized learning.}
} 714 714 }
715 715
@article{Auer, 716 716 @article{Auer,
title = {From monolithic systems to Microservices: An assessment framework}, 717 717 title = {From monolithic systems to Microservices: An assessment framework},
journal = {Information and Software Technology}, 718 718 journal = {Information and Software Technology},
volume = {137}, 719 719 volume = {137},
pages = {106600}, 720 720 pages = {106600},
year = {2021}, 721 721 year = {2021},
issn = {0950-5849}, 722 722 issn = {0950-5849},
doi = {10.1016/j.infsof.2021.106600},
url = {https://www.sciencedirect.com/science/article/pii/S0950584921000793}, 724 724 url = {https://www.sciencedirect.com/science/article/pii/S0950584921000793},
author = {Florian Auer and Valentina Lenarduzzi and Michael Felderer and Davide Taibi}, 725 725 author = {Florian Auer and Valentina Lenarduzzi and Michael Felderer and Davide Taibi},
keywords = {Microservices, Cloud migration, Software measurement}, 726 726 keywords = {Microservices, Cloud migration, Software measurement},
abstract = {Context: 727 727 abstract = {Context:
Re-architecting monolithic systems with Microservices-based architecture is a common trend. Various companies are migrating to Microservices for different reasons. However, making such an important decision like re-architecting an entire system must be based on real facts and not only on gut feelings. 728 728 Re-architecting monolithic systems with Microservices-based architecture is a common trend. Various companies are migrating to Microservices for different reasons. However, making such an important decision like re-architecting an entire system must be based on real facts and not only on gut feelings.
Objective: 729 729 Objective:
The goal of this work is to propose an evidence-based decision support framework for companies that need to migrate to Microservices, based on the analysis of a set of characteristics and metrics they should collect before re-architecting their monolithic system. 730 730 The goal of this work is to propose an evidence-based decision support framework for companies that need to migrate to Microservices, based on the analysis of a set of characteristics and metrics they should collect before re-architecting their monolithic system.
Method: 731 731 Method:
We conducted a survey done in the form of interviews with professionals to derive the assessment framework based on Grounded Theory. 732 732 We conducted a survey done in the form of interviews with professionals to derive the assessment framework based on Grounded Theory.
Results: 733 733 Results:
We identified a set consisting of information and metrics that companies can use to decide whether to migrate to Microservices or not. The proposed assessment framework, based on the aforementioned metrics, could be useful for companies if they need to migrate to Microservices and do not want to run the risk of failing to consider some important information.} 734 734 We identified a set consisting of information and metrics that companies can use to decide whether to migrate to Microservices or not. The proposed assessment framework, based on the aforementioned metrics, could be useful for companies if they need to migrate to Microservices and do not want to run the risk of failing to consider some important information.}
} 735 735 }
736 736
@Article{jmse10040464, 737 737 @Article{jmse10040464,
AUTHOR = {Zuluaga, Carlos A. and Aristizábal, Luis M. and Rúa, Santiago and Franco, Diego A. and Osorio, Dorie A. and Vásquez, Rafael E.}, 738 738 AUTHOR = {Zuluaga, Carlos A. and Aristizábal, Luis M. and Rúa, Santiago and Franco, Diego A. and Osorio, Dorie A. and Vásquez, Rafael E.},
TITLE = {Development of a Modular Software Architecture for Underwater Vehicles Using Systems Engineering}, 739 739 TITLE = {Development of a Modular Software Architecture for Underwater Vehicles Using Systems Engineering},
JOURNAL = {Journal of Marine Science and Engineering}, 740 740 JOURNAL = {Journal of Marine Science and Engineering},
VOLUME = {10}, 741 741 VOLUME = {10},
YEAR = {2022}, 742 742 YEAR = {2022},
NUMBER = {4}, 743 743 NUMBER = {4},
ARTICLE-NUMBER = {464}, 744 744 ARTICLE-NUMBER = {464},
URL = {https://www.mdpi.com/2077-1312/10/4/464}, 745 745 URL = {https://www.mdpi.com/2077-1312/10/4/464},
ISSN = {2077-1312}, 746 746 ISSN = {2077-1312},
ABSTRACT = {This paper addresses the development of a modular software architecture for the design/construction/operation of a remotely operated vehicle (ROV), based on systems engineering. First, systems engineering and the Vee model are presented with the objective of defining the interactions of the stakeholders with the software architecture development team and establishing the baselines that must be met in each development phase. In the development stage, the definition of the architecture and its connection with the hardware is presented, taking into account the use of the actor model, which represents the high-level software architecture used to solve concurrency problems. Subsequently, the structure of the classes is defined both at high and low levels in the instruments using the object-oriented programming paradigm. Finally, unit tests are developed for each component in the software architecture, quality assessment tests are implemented for system functions fulfillment, and a field sea trial for testing different modules of the vehicle is described. This approach is well suited for the development of complex systems such as marine vehicles and those systems which require scalability and modularity to add functionalities.}, 747 747 ABSTRACT = {This paper addresses the development of a modular software architecture for the design/construction/operation of a remotely operated vehicle (ROV), based on systems engineering. First, systems engineering and the Vee model are presented with the objective of defining the interactions of the stakeholders with the software architecture development team and establishing the baselines that must be met in each development phase. In the development stage, the definition of the architecture and its connection with the hardware is presented, taking into account the use of the actor model, which represents the high-level software architecture used to solve concurrency problems. 
Subsequently, the structure of the classes is defined both at high and low levels in the instruments using the object-oriented programming paradigm. Finally, unit tests are developed for each component in the software architecture, quality assessment tests are implemented for system functions fulfillment, and a field sea trial for testing different modules of the vehicle is described. This approach is well suited for the development of complex systems such as marine vehicles and those systems which require scalability and modularity to add functionalities.},
DOI = {10.3390/jmse10040464} 748 748 DOI = {10.3390/jmse10040464}
} 749 749 }
750 750
@article{doi:10.1177/1754337116651013, 751 751 @article{doi:10.1177/1754337116651013,
author = {Julien Henriet and Christophe Lang and Laurent Philippe},
title ={Artificial Intelligence-Virtual Trainer: An educative system based on artificial intelligence and designed to produce varied and consistent training lessons}, 753 753 title ={Artificial Intelligence-Virtual Trainer: An educative system based on artificial intelligence and designed to produce varied and consistent training lessons},
journal = {Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology}, 754 754 journal = {Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology},
volume = {231}, 755 755 volume = {231},
number = {2}, 756 756 number = {2},
pages = {110-124}, 757 757 pages = {110-124},
year = {2017}, 758 758 year = {2017},
doi = {10.1177/1754337116651013}, 759 759 doi = {10.1177/1754337116651013},
URL = {https://doi.org/10.1177/1754337116651013}, 760 760 URL = {https://doi.org/10.1177/1754337116651013},
eprint = {https://doi.org/10.1177/1754337116651013}, 761 761 eprint = {https://doi.org/10.1177/1754337116651013},
abstract = { AI-Virtual Trainer is an educative system using Artificial Intelligence to propose varied lessons to trainers. The agents of this multi-agent system apply case-based reasoning to build solutions by analogy. However, as required by the field, Artificial Intelligence-Virtual Trainer never proposes the same lesson twice, whereas the same objective may be set many times consecutively. The adaptation process of Artificial Intelligence-Virtual Trainer delivers an ordered set of exercises adapted to the objectives and sub-objectives chosen by trainers. This process has been enriched by including the notion of distance between exercises: the proposed tasks are not only appropriate but are hierarchically ordered. With this new version of the system, students are guided towards their objectives via an underlying theme. Finally, the agents responsible for the different parts of lessons collaborate with each other according to a dedicated protocol and decision-making policy since no exercise must appear more than once in the same lesson. The results prove that Artificial Intelligence-Virtual Trainer, however perfectible, meets the requirements of this field. } 762 762 abstract = { AI-Virtual Trainer is an educative system using Artificial Intelligence to propose varied lessons to trainers. The agents of this multi-agent system apply case-based reasoning to build solutions by analogy. However, as required by the field, Artificial Intelligence-Virtual Trainer never proposes the same lesson twice, whereas the same objective may be set many times consecutively. The adaptation process of Artificial Intelligence-Virtual Trainer delivers an ordered set of exercises adapted to the objectives and sub-objectives chosen by trainers. This process has been enriched by including the notion of distance between exercises: the proposed tasks are not only appropriate but are hierarchically ordered. 
With this new version of the system, students are guided towards their objectives via an underlying theme. Finally, the agents responsible for the different parts of lessons collaborate with each other according to a dedicated protocol and decision-making policy since no exercise must appear more than once in the same lesson. The results prove that Artificial Intelligence-Virtual Trainer, however perfectible, meets the requirements of this field. }
} 763 763 }
764 764
@InProceedings{10.1007/978-3-030-01081-2_9, 765 765 @InProceedings{10.1007/978-3-030-01081-2_9,
author="Henriet, Julien 766 766 author="Henriet, Julien
and Greffier, Fran{\c{c}}oise", 767 767 and Greffier, Fran{\c{c}}oise",
editor="Cox, Michael T. 768 768 editor="Cox, Michael T.
and Funk, Peter 769 769 and Funk, Peter
and Begum, Shahina", 770 770 and Begum, Shahina",
title="AI-VT: An Example of CBR that Generates a Variety of Solutions to the Same Problem", 771 771 title="AI-VT: An Example of CBR that Generates a Variety of Solutions to the Same Problem",
booktitle="Case-Based Reasoning Research and Development", 772 772 booktitle="Case-Based Reasoning Research and Development",
year="2018", 773 773 year="2018",
publisher="Springer International Publishing", 774 774 publisher="Springer International Publishing",
address="Cham", 775 775 address="Cham",
pages="124--139", 776 776 pages="124--139",
abstract="AI-Virtual Trainer (AI-VT) is an intelligent tutoring system based on case-based reasoning. AI-VT has been designed to generate personalised, varied, and consistent training sessions for learners. The AI-VT training sessions propose different exercises in regard to a capacity associated with sub-capacities. For example, in the field of training for algorithms, a capacity could be ``Use a control structure alternative'' and an associated sub-capacity could be ``Write a boolean condition''. AI-VT can elaborate a personalised list of exercises for each learner. One of the main requirements and challenges studied in this work is its ability to propose varied training sessions to the same learner for many weeks, which constitutes the challenge studied in our work. Indeed, if the same set of exercises is proposed time after time to learners, they will stop paying attention and lose motivation. Thus, even if the generation of training sessions is based on analogy and must integrate the repetition of some exercises, it also must introduce some diversity and AI-VT must deal with this diversity. In this paper, we have highlighted the fact that the retaining (or capitalisation) phase of CBR is of the utmost importance for diversity, and we have also highlighted that the equilibrium between repetition and variety depends on the abilities learned. This balance has an important impact on the retaining phase of AI-VT.", 777 777 abstract="AI-Virtual Trainer (AI-VT) is an intelligent tutoring system based on case-based reasoning. AI-VT has been designed to generate personalised, varied, and consistent training sessions for learners. The AI-VT training sessions propose different exercises in regard to a capacity associated with sub-capacities. For example, in the field of training for algorithms, a capacity could be ``Use a control structure alternative'' and an associated sub-capacity could be ``Write a boolean condition''. 
AI-VT can elaborate a personalised list of exercises for each learner. One of the main requirements and challenges studied in this work is its ability to propose varied training sessions to the same learner for many weeks, which constitutes the challenge studied in our work. Indeed, if the same set of exercises is proposed time after time to learners, they will stop paying attention and lose motivation. Thus, even if the generation of training sessions is based on analogy and must integrate the repetition of some exercises, it also must introduce some diversity and AI-VT must deal with this diversity. In this paper, we have highlighted the fact that the retaining (or capitalisation) phase of CBR is of the utmost importance for diversity, and we have also highlighted that the equilibrium between repetition and variety depends on the abilities learned. This balance has an important impact on the retaining phase of AI-VT.",
isbn="978-3-030-01081-2" 778 778 isbn="978-3-030-01081-2"
} 779 779 }
780 780
@article{BAKUROV2021100913, 781 781 @article{BAKUROV2021100913,
title = {Genetic programming for stacked generalization}, 782 782 title = {Genetic programming for stacked generalization},
journal = {Swarm and Evolutionary Computation}, 783 783 journal = {Swarm and Evolutionary Computation},
volume = {65}, 784 784 volume = {65},
pages = {100913}, 785 785 pages = {100913},
year = {2021}, 786 786 year = {2021},
issn = {2210-6502}, 787 787 issn = {2210-6502},
doi = {10.1016/j.swevo.2021.100913},
url = {https://www.sciencedirect.com/science/article/pii/S2210650221000742}, 789 789 url = {https://www.sciencedirect.com/science/article/pii/S2210650221000742},
author = {Illya Bakurov and Mauro Castelli and Olivier Gau and Francesco Fontanella and Leonardo Vanneschi}, 790 790 author = {Illya Bakurov and Mauro Castelli and Olivier Gau and Francesco Fontanella and Leonardo Vanneschi},
keywords = {Genetic Programming, Stacking, Ensemble Learning, Stacked Generalization}, 791 791 keywords = {Genetic Programming, Stacking, Ensemble Learning, Stacked Generalization},
abstract = {In machine learning, ensemble techniques are widely used to improve the performance of both classification and regression systems. They combine the models generated by different learning algorithms, typically trained on different data subsets or with different parameters, to obtain more accurate models. Ensemble strategies range from simple voting rules to more complex and effective stacked approaches. They are based on adopting a meta-learner, i.e. a further learning algorithm, and are trained on the predictions provided by the single algorithms making up the ensemble. The paper aims at exploiting some of the most recent genetic programming advances in the context of stacked generalization. In particular, we investigate how the evolutionary demes despeciation initialization technique, ϵ-lexicase selection, geometric-semantic operators, and semantic stopping criterion, can be effectively used to improve GP-based systems’ performance for stacked generalization (a.k.a. stacking). The experiments, performed on a broad set of synthetic and real-world regression problems, confirm the effectiveness of the proposed approach.} 792 792 abstract = {In machine learning, ensemble techniques are widely used to improve the performance of both classification and regression systems. They combine the models generated by different learning algorithms, typically trained on different data subsets or with different parameters, to obtain more accurate models. Ensemble strategies range from simple voting rules to more complex and effective stacked approaches. They are based on adopting a meta-learner, i.e. a further learning algorithm, and are trained on the predictions provided by the single algorithms making up the ensemble. The paper aims at exploiting some of the most recent genetic programming advances in the context of stacked generalization. 
In particular, we investigate how the evolutionary demes despeciation initialization technique, ϵ-lexicase selection, geometric-semantic operators, and semantic stopping criterion, can be effectively used to improve GP-based systems’ performance for stacked generalization (a.k.a. stacking). The experiments, performed on a broad set of synthetic and real-world regression problems, confirm the effectiveness of the proposed approach.}
} 793 793 }
794 794
@article{Liang, 795 795 @article{Liang,
author={Liang, Mang and Chang, Tianpeng and An, Bingxing and Duan, Xinghai and Du, Lili and Wang, Xiaoqiao and Miao, Jian and Xu, Lingyang and Gao, Xue and Zhang, Lupei and Li, Junya and Gao, Huijiang},
title={A Stacking Ensemble Learning Framework for Genomic Prediction},
journal={Frontiers in Genetics},
year={2021}, 799 799 year={2021},
doi = {10.3389/fgene.2021.600040},
PMID={33747037}, 801 801 PMID={33747037},
PMCID={PMC7969712} 802 802 PMCID={PMC7969712}
} 803 803 }
804 804
@Article{cmc.2023.033417, 805 805 @Article{cmc.2023.033417,
AUTHOR = {Jeonghoon Choi and Dongjun Suh and Marc-Oliver Otto}, 806 806 AUTHOR = {Jeonghoon Choi and Dongjun Suh and Marc-Oliver Otto},
TITLE = {Boosted Stacking Ensemble Machine Learning Method for Wafer Map Pattern Classification}, 807 807 TITLE = {Boosted Stacking Ensemble Machine Learning Method for Wafer Map Pattern Classification},
JOURNAL = {Computers, Materials \& Continua}, 808 808 JOURNAL = {Computers, Materials \& Continua},
VOLUME = {74}, 809 809 VOLUME = {74},
YEAR = {2023}, 810 810 YEAR = {2023},
NUMBER = {2}, 811 811 NUMBER = {2},
PAGES = {2945--2966}, 812 812 PAGES = {2945--2966},
URL = {http://www.techscience.com/cmc/v74n2/50296}, 813 813 URL = {http://www.techscience.com/cmc/v74n2/50296},
ISSN = {1546-2226}, 814 814 ISSN = {1546-2226},
ABSTRACT = {Recently, machine learning-based technologies have been developed to automate the classification of wafer map defect patterns during semiconductor manufacturing. The existing approaches used in the wafer map pattern classification include directly learning the image through a convolution neural network and applying the ensemble method after extracting image features. This study aims to classify wafer map defects more effectively and derive robust algorithms even for datasets with insufficient defect patterns. First, the number of defects during the actual process may be limited. Therefore, insufficient data are generated using convolutional auto-encoder (CAE), and the expanded data are verified using the evaluation technique of structural similarity index measure (SSIM). After extracting handcrafted features, a boosted stacking ensemble model that integrates the four base-level classifiers with the extreme gradient boosting classifier as a meta-level classifier is designed and built for training the model based on the expanded data for final prediction. Since the proposed algorithm shows better performance than those of existing ensemble classifiers even for insufficient defect patterns, the results of this study will contribute to improving the product quality and yield of the actual semiconductor manufacturing process.}, 815 815 ABSTRACT = {Recently, machine learning-based technologies have been developed to automate the classification of wafer map defect patterns during semiconductor manufacturing. The existing approaches used in the wafer map pattern classification include directly learning the image through a convolution neural network and applying the ensemble method after extracting image features. This study aims to classify wafer map defects more effectively and derive robust algorithms even for datasets with insufficient defect patterns. First, the number of defects during the actual process may be limited. 
Therefore, insufficient data are generated using convolutional auto-encoder (CAE), and the expanded data are verified using the evaluation technique of structural similarity index measure (SSIM). After extracting handcrafted features, a boosted stacking ensemble model that integrates the four base-level classifiers with the extreme gradient boosting classifier as a meta-level classifier is designed and built for training the model based on the expanded data for final prediction. Since the proposed algorithm shows better performance than those of existing ensemble classifiers even for insufficient defect patterns, the results of this study will contribute to improving the product quality and yield of the actual semiconductor manufacturing process.},
DOI = {10.32604/cmc.2023.033417} 816 816 DOI = {10.32604/cmc.2023.033417}
} 817 817 }
818 818
@ARTICLE{10.3389/fgene.2021.600040, 819 819 @ARTICLE{10.3389/fgene.2021.600040,
AUTHOR={Liang, Mang and Chang, Tianpeng and An, Bingxing and Duan, Xinghai and Du, Lili and Wang, Xiaoqiao and Miao, Jian and Xu, Lingyang and Gao, Xue and Zhang, Lupei and Li, Junya and Gao, Huijiang}, 820 820 AUTHOR={Liang, Mang and Chang, Tianpeng and An, Bingxing and Duan, Xinghai and Du, Lili and Wang, Xiaoqiao and Miao, Jian and Xu, Lingyang and Gao, Xue and Zhang, Lupei and Li, Junya and Gao, Huijiang},
TITLE={A Stacking Ensemble Learning Framework for Genomic Prediction}, 821 821 TITLE={A Stacking Ensemble Learning Framework for Genomic Prediction},
JOURNAL={Frontiers in Genetics}, 822 822 JOURNAL={Frontiers in Genetics},
VOLUME={12}, 823 823 VOLUME={12},
YEAR={2021}, 824 824 YEAR={2021},
URL={https://www.frontiersin.org/articles/10.3389/fgene.2021.600040}, 825 825 URL={https://www.frontiersin.org/articles/10.3389/fgene.2021.600040},
DOI={10.3389/fgene.2021.600040}, 826 826 DOI={10.3389/fgene.2021.600040},
ISSN={1664-8021}, 827 827 ISSN={1664-8021},
ABSTRACT={Machine learning (ML) is perhaps the most useful tool for the interpretation of large genomic datasets. However, the performance of a single machine learning method in genomic selection (GS) is currently unsatisfactory. To improve the genomic predictions, we constructed a stacking ensemble learning framework (SELF), integrating three machine learning methods, to predict genomic estimated breeding values (GEBVs). The present study evaluated the prediction ability of SELF by analyzing three real datasets, with different genetic architecture; comparing the prediction accuracy of SELF, base learners, genomic best linear unbiased prediction (GBLUP) and BayesB. For each trait, SELF performed better than base learners, which included support vector regression (SVR), kernel ridge regression (KRR) and elastic net (ENET). The prediction accuracy of SELF was, on average, 7.70% higher than GBLUP in three datasets. Except for the milk fat percentage (MFP) traits, of the German Holstein dairy cattle dataset, SELF was more robust than BayesB in all remaining traits. Therefore, we believed that SEFL has the potential to be promoted to estimate GEBVs in other animals and plants.} 828 828 ABSTRACT={Machine learning (ML) is perhaps the most useful tool for the interpretation of large genomic datasets. However, the performance of a single machine learning method in genomic selection (GS) is currently unsatisfactory. To improve the genomic predictions, we constructed a stacking ensemble learning framework (SELF), integrating three machine learning methods, to predict genomic estimated breeding values (GEBVs). The present study evaluated the prediction ability of SELF by analyzing three real datasets, with different genetic architecture; comparing the prediction accuracy of SELF, base learners, genomic best linear unbiased prediction (GBLUP) and BayesB. 
For each trait, SELF performed better than base learners, which included support vector regression (SVR), kernel ridge regression (KRR) and elastic net (ENET). The prediction accuracy of SELF was, on average, 7.70% higher than GBLUP in three datasets. Except for the milk fat percentage (MFP) traits, of the German Holstein dairy cattle dataset, SELF was more robust than BayesB in all remaining traits. Therefore, we believed that SELF has the potential to be promoted to estimate GEBVs in other animals and plants.}
} 829 829 }
830 830
@article{DIDDEN2023338, 831 831 @article{DIDDEN2023338,
title = {Decentralized learning multi-agent system for online machine shop scheduling problem}, 832 832 title = {Decentralized learning multi-agent system for online machine shop scheduling problem},
journal = {Journal of Manufacturing Systems}, 833 833 journal = {Journal of Manufacturing Systems},
volume = {67}, 834 834 volume = {67},
pages = {338-360}, 835 835 pages = {338-360},
year = {2023}, 836 836 year = {2023},
issn = {0278-6125}, 837 837 issn = {0278-6125},
doi = {10.1016/j.jmsy.2023.02.004},
url = {https://www.sciencedirect.com/science/article/pii/S0278612523000286}, 839 839 url = {https://www.sciencedirect.com/science/article/pii/S0278612523000286},
author = {Jeroen B.H.C. Didden and Quang-Vinh Dang and Ivo J.B.F. Adan}, 840 840 author = {Jeroen B.H.C. Didden and Quang-Vinh Dang and Ivo J.B.F. Adan},
keywords = {Multi-agent system, Decentralized systems, Learning algorithm, Industry 4.0, Smart manufacturing}, 841 841 keywords = {Multi-agent system, Decentralized systems, Learning algorithm, Industry 4.0, Smart manufacturing},
abstract = {Customer profiles have rapidly changed over the past few years, with products being requested with more customization and with lower demand. In addition to the advances in technologies owing to Industry 4.0, manufacturers explore autonomous and smart factories. This paper proposes a decentralized multi-agent system (MAS), including intelligent agents that can respond to their environment autonomously through learning capabilities, to cope with an online machine shop scheduling problem. In the proposed system, agents participate in auctions to receive jobs to process, learn how to bid for jobs correctly, and decide when to start processing a job. The objective is to minimize the mean weighted tardiness of all jobs. In contrast to the existing literature, the proposed MAS is assessed on its learning capabilities, producing novel insights concerning what is relevant for learning, when re-learning is needed, and system response to dynamic events (such as rush jobs, increase in processing time, and machine unavailability). Computational experiments also reveal the outperformance of the proposed MAS to other multi-agent systems by at least 25% and common dispatching rules in mean weighted tardiness, as well as other performance measures.} 842 842 abstract = {Customer profiles have rapidly changed over the past few years, with products being requested with more customization and with lower demand. In addition to the advances in technologies owing to Industry 4.0, manufacturers explore autonomous and smart factories. This paper proposes a decentralized multi-agent system (MAS), including intelligent agents that can respond to their environment autonomously through learning capabilities, to cope with an online machine shop scheduling problem. In the proposed system, agents participate in auctions to receive jobs to process, learn how to bid for jobs correctly, and decide when to start processing a job. 
The objective is to minimize the mean weighted tardiness of all jobs. In contrast to the existing literature, the proposed MAS is assessed on its learning capabilities, producing novel insights concerning what is relevant for learning, when re-learning is needed, and system response to dynamic events (such as rush jobs, increase in processing time, and machine unavailability). Computational experiments also reveal the outperformance of the proposed MAS to other multi-agent systems by at least 25% and common dispatching rules in mean weighted tardiness, as well as other performance measures.}
} 843 843 }
844 844
@article{REZAEI20221, 845 845 @article{REZAEI20221,
title = {A Biased Inferential Naivety learning model for a network of agents}, 846 846 title = {A Biased Inferential Naivety learning model for a network of agents},
journal = {Cognitive Systems Research}, 847 847 journal = {Cognitive Systems Research},
volume = {76}, 848 848 volume = {76},
pages = {1-12}, 849 849 pages = {1-12},
year = {2022}, 850 850 year = {2022},
issn = {1389-0417}, 851 851 issn = {1389-0417},
doi = {10.1016/j.cogsys.2022.07.001},
url = {https://www.sciencedirect.com/science/article/pii/S1389041722000298}, 853 853 url = {https://www.sciencedirect.com/science/article/pii/S1389041722000298},
author = {Zeinab Rezaei and Saeed Setayeshi and Ebrahim Mahdipour}, 854 854 author = {Zeinab Rezaei and Saeed Setayeshi and Ebrahim Mahdipour},
keywords = {Bayesian decision making, Heuristic method, Inferential naivety assumption, Observational learning, Social learning}, 855 855 keywords = {Bayesian decision making, Heuristic method, Inferential naivety assumption, Observational learning, Social learning},
abstract = {We propose a Biased Inferential Naivety social learning model. In this model, a group of agents tries to determine the true state of the world and make the best possible decisions. The agents have limited computational abilities. They receive noisy private signals about the true state and observe the history of their neighbors' decisions. The proposed model is rooted in the Bayesian method but avoids the complexity of fully Bayesian inference. In our model, the role of knowledge obtained from social observations is separated from the knowledge obtained from private observations. Therefore, the Bayesian inferences on social observations are approximated using inferential naivety assumption, while purely Bayesian inferences are made on private observations. The reduction of herd behavior is another innovation of the proposed model. This advantage is achieved by reducing the effect of social observations on agents' beliefs over time. Therefore, all the agents learn the truth, and the correct consensus is achieved effectively. In this model, using two cognitive biases, there is heterogeneity in agents' behaviors. Therefore, the growth of beliefs and the learning speed can be improved in different situations. Several Monte Carlo simulations confirm the features of the proposed model. The conditions under which the proposed model leads to asymptotic learning are proved.} 856 856 abstract = {We propose a Biased Inferential Naivety social learning model. In this model, a group of agents tries to determine the true state of the world and make the best possible decisions. The agents have limited computational abilities. They receive noisy private signals about the true state and observe the history of their neighbors' decisions. The proposed model is rooted in the Bayesian method but avoids the complexity of fully Bayesian inference. 
In our model, the role of knowledge obtained from social observations is separated from the knowledge obtained from private observations. Therefore, the Bayesian inferences on social observations are approximated using inferential naivety assumption, while purely Bayesian inferences are made on private observations. The reduction of herd behavior is another innovation of the proposed model. This advantage is achieved by reducing the effect of social observations on agents' beliefs over time. Therefore, all the agents learn the truth, and the correct consensus is achieved effectively. In this model, using two cognitive biases, there is heterogeneity in agents' behaviors. Therefore, the growth of beliefs and the learning speed can be improved in different situations. Several Monte Carlo simulations confirm the features of the proposed model. The conditions under which the proposed model leads to asymptotic learning are proved.}
} 857 857 }
858 858
@article{KAMALI2023110242, 859 859 @article{KAMALI2023110242,
title = {An immune inspired multi-agent system for dynamic multi-objective optimization}, 860 860 title = {An immune inspired multi-agent system for dynamic multi-objective optimization},
journal = {Knowledge-Based Systems}, 861 861 journal = {Knowledge-Based Systems},
volume = {262}, 862 862 volume = {262},
pages = {110242}, 863 863 pages = {110242},
year = {2023}, 864 864 year = {2023},
issn = {0950-7051}, 865 865 issn = {0950-7051},
doi = {10.1016/j.knosys.2022.110242},
url = {https://www.sciencedirect.com/science/article/pii/S0950705122013387}, 867 867 url = {https://www.sciencedirect.com/science/article/pii/S0950705122013387},
author = {Seyed Ruhollah Kamali and Touraj Banirostam and Homayun Motameni and Mohammad Teshnehlab}, 868 868 author = {Seyed Ruhollah Kamali and Touraj Banirostam and Homayun Motameni and Mohammad Teshnehlab},
keywords = {Immune inspired multi-agent system, Dynamic multi-objective optimization, Severe and frequent changes}, 869 869 keywords = {Immune inspired multi-agent system, Dynamic multi-objective optimization, Severe and frequent changes},
abstract = {In this research, an immune inspired multi-agent system (IMAS) is proposed to solve optimization problems in dynamic and multi-objective environments. The proposed IMAS uses artificial immune system metaphors to shape the local behaviors of agents to detect environmental changes, generate Pareto optimal solutions, and react to the dynamics of the problem environment. Apart from that, agents enhance their adaptive capacity in dealing with environmental changes to find the global optimum, with a hierarchical structure without any central control. This study used a combination of diversity-, multi-population- and memory-based approaches to perform better in multi-objective environments with severe and frequent changes. The proposed IMAS is compared with six state-of-the-art algorithms on various benchmark problems. The results indicate its superiority in many of the experiments.} 870 870 abstract = {In this research, an immune inspired multi-agent system (IMAS) is proposed to solve optimization problems in dynamic and multi-objective environments. The proposed IMAS uses artificial immune system metaphors to shape the local behaviors of agents to detect environmental changes, generate Pareto optimal solutions, and react to the dynamics of the problem environment. Apart from that, agents enhance their adaptive capacity in dealing with environmental changes to find the global optimum, with a hierarchical structure without any central control. This study used a combination of diversity-, multi-population- and memory-based approaches to perform better in multi-objective environments with severe and frequent changes. The proposed IMAS is compared with six state-of-the-art algorithms on various benchmark problems. The results indicate its superiority in many of the experiments.}
} 871 871 }
872 872
@article{ZHANG2023110564, 873 873 @article{ZHANG2023110564,
title = {A novel human learning optimization algorithm with Bayesian inference learning}, 874 874 title = {A novel human learning optimization algorithm with Bayesian inference learning},
journal = {Knowledge-Based Systems}, 875 875 journal = {Knowledge-Based Systems},
volume = {271}, 876 876 volume = {271},
pages = {110564}, 877 877 pages = {110564},
year = {2023}, 878 878 year = {2023},
issn = {0950-7051}, 879 879 issn = {0950-7051},
doi = {10.1016/j.knosys.2023.110564},
url = {https://www.sciencedirect.com/science/article/pii/S0950705123003143}, 881 881 url = {https://www.sciencedirect.com/science/article/pii/S0950705123003143},
author = {Pinggai Zhang and Ling Wang and Zixiang Fei and Lisheng Wei and Minrui Fei and Muhammad Ilyas Menhas}, 882 882 author = {Pinggai Zhang and Ling Wang and Zixiang Fei and Lisheng Wei and Minrui Fei and Muhammad Ilyas Menhas},
keywords = {Human learning optimization, Meta-heuristic, Bayesian inference, Bayesian inference learning, Individual learning, Social learning}, 883 883 keywords = {Human learning optimization, Meta-heuristic, Bayesian inference, Bayesian inference learning, Individual learning, Social learning},
abstract = {Humans perform Bayesian inference in a wide variety of tasks, which can help people make selection decisions effectively and therefore enhances learning efficiency and accuracy. Inspired by this fact, this paper presents a novel human learning optimization algorithm with Bayesian inference learning (HLOBIL), in which a Bayesian inference learning operator (BILO) is developed to utilize the inference strategy for enhancing learning efficiency. The in-depth analysis shows that the proposed BILO can efficiently improve the exploitation ability of the algorithm as it can achieve the optimal values and retrieve the optimal information with the accumulated search information. Besides, the exploration ability of HLOBIL is also strengthened by the inborn characteristics of Bayesian inference. The experimental results demonstrate that the developed HLOBIL is superior to previous HLO variants and other state-of-art algorithms with its improved exploitation and exploration abilities.} 884 884 abstract = {Humans perform Bayesian inference in a wide variety of tasks, which can help people make selection decisions effectively and therefore enhances learning efficiency and accuracy. Inspired by this fact, this paper presents a novel human learning optimization algorithm with Bayesian inference learning (HLOBIL), in which a Bayesian inference learning operator (BILO) is developed to utilize the inference strategy for enhancing learning efficiency. The in-depth analysis shows that the proposed BILO can efficiently improve the exploitation ability of the algorithm as it can achieve the optimal values and retrieve the optimal information with the accumulated search information. Besides, the exploration ability of HLOBIL is also strengthened by the inborn characteristics of Bayesian inference. 
The experimental results demonstrate that the developed HLOBIL is superior to previous HLO variants and other state-of-art algorithms with its improved exploitation and exploration abilities.}
} 885 885 }
886 886
@article{HIPOLITO2023103510, 887 887 @article{HIPOLITO2023103510,
title = {Breaking boundaries: The Bayesian Brain Hypothesis for perception and prediction}, 888 888 title = {Breaking boundaries: The Bayesian Brain Hypothesis for perception and prediction},
journal = {Consciousness and Cognition}, 889 889 journal = {Consciousness and Cognition},
volume = {111}, 890 890 volume = {111},
pages = {103510}, 891 891 pages = {103510},
year = {2023}, 892 892 year = {2023},
issn = {1053-8100}, 893 893 issn = {1053-8100},
doi = {10.1016/j.concog.2023.103510},
url = {https://www.sciencedirect.com/science/article/pii/S1053810023000478}, 895 895 url = {https://www.sciencedirect.com/science/article/pii/S1053810023000478},
author = {Inês Hipólito and Michael Kirchhoff}, 896 896 author = {Inês Hipólito and Michael Kirchhoff},
keywords = {Bayesian Brain Hypothesis, Modularity of the Mind, Cognitive processes, Informational boundaries}, 897 897 keywords = {Bayesian Brain Hypothesis, Modularity of the Mind, Cognitive processes, Informational boundaries},
abstract = {This special issue aims to provide a comprehensive overview of the current state of the Bayesian Brain Hypothesis and its standing across neuroscience, cognitive science and the philosophy of cognitive science. By gathering cutting-edge research from leading experts, this issue seeks to showcase the latest advancements in our understanding of the Bayesian brain, as well as its potential implications for future research in perception, cognition, and motor control. A special focus to achieve this aim is adopted in this special issue, as it seeks to explore the relation between two seemingly incompatible frameworks for the understanding of cognitive structure and function: the Bayesian Brain Hypothesis and the Modularity Theory of the Mind. In assessing the compatibility between these theories, the contributors to this special issue open up new pathways of thinking and advance our understanding of cognitive processes.} 898 898 abstract = {This special issue aims to provide a comprehensive overview of the current state of the Bayesian Brain Hypothesis and its standing across neuroscience, cognitive science and the philosophy of cognitive science. By gathering cutting-edge research from leading experts, this issue seeks to showcase the latest advancements in our understanding of the Bayesian brain, as well as its potential implications for future research in perception, cognition, and motor control. A special focus to achieve this aim is adopted in this special issue, as it seeks to explore the relation between two seemingly incompatible frameworks for the understanding of cognitive structure and function: the Bayesian Brain Hypothesis and the Modularity Theory of the Mind. In assessing the compatibility between these theories, the contributors to this special issue open up new pathways of thinking and advance our understanding of cognitive processes.}
} 899 899 }
900 900
@article{LI2023424, 901 901 @article{LI2023424,
title = {Multi-agent evolution reinforcement learning method for machining parameters optimization based on bootstrap aggregating graph attention network simulated environment}, 902 902 title = {Multi-agent evolution reinforcement learning method for machining parameters optimization based on bootstrap aggregating graph attention network simulated environment},
journal = {Journal of Manufacturing Systems}, 903 903 journal = {Journal of Manufacturing Systems},
volume = {67}, 904 904 volume = {67},
pages = {424-438}, 905 905 pages = {424-438},
year = {2023}, 906 906 year = {2023},
issn = {0278-6125}, 907 907 issn = {0278-6125},
doi = {10.1016/j.jmsy.2023.02.015},
url = {https://www.sciencedirect.com/science/article/pii/S0278612523000390}, 909 909 url = {https://www.sciencedirect.com/science/article/pii/S0278612523000390},
author = {Weiye Li and Songping He and Xinyong Mao and Bin Li and Chaochao Qiu and Jinwen Yu and Fangyu Peng and Xin Tan}, 910 910 author = {Weiye Li and Songping He and Xinyong Mao and Bin Li and Chaochao Qiu and Jinwen Yu and Fangyu Peng and Xin Tan},
keywords = {Surface roughness, Cutting efficiency, Machining parameters optimization, Graph attention network, Multi-agent reinforcement learning, Evolutionary learning}, 911 911 keywords = {Surface roughness, Cutting efficiency, Machining parameters optimization, Graph attention network, Multi-agent reinforcement learning, Evolutionary learning},
abstract = {Improving machining quality and production efficiency is the focus of the manufacturing industry. How to obtain efficient machining parameters under multiple constraints such as machining quality is a severe challenge for manufacturing industry. In this paper, a multi-agent evolutionary reinforcement learning method (MAERL) is proposed to optimize the machining parameters for high quality and high efficiency machining by combining the graph neural network and reinforcement learning. Firstly, a bootstrap aggregating graph attention network (Bagging-GAT) based roughness estimation method for machined surface is proposed, which combines the structural knowledge between machining parameters and vibration features. Secondly, a mathematical model of machining parameters optimization problem is established, which is formalized into Markov decision process (MDP), and a multi-agent reinforcement learning method is proposed to solve the MDP problem, and evolutionary learning is introduced to improve the stability of multi-agent training. Finally, a series of experiments were carried out on the commutator production line, and the results show that the proposed Bagging-GAT-based method can improve the prediction effect by about 25% in the case of small samples, and the MAERL-based optimization method can better deal with the coupling problem of reward function in the optimization process. Compared with the classical optimization method, the optimization effect is improved by 13% and a lot of optimization time is saved.} 912 912 abstract = {Improving machining quality and production efficiency is the focus of the manufacturing industry. How to obtain efficient machining parameters under multiple constraints such as machining quality is a severe challenge for manufacturing industry. 
In this paper, a multi-agent evolutionary reinforcement learning method (MAERL) is proposed to optimize the machining parameters for high quality and high efficiency machining by combining the graph neural network and reinforcement learning. Firstly, a bootstrap aggregating graph attention network (Bagging-GAT) based roughness estimation method for machined surface is proposed, which combines the structural knowledge between machining parameters and vibration features. Secondly, a mathematical model of machining parameters optimization problem is established, which is formalized into Markov decision process (MDP), and a multi-agent reinforcement learning method is proposed to solve the MDP problem, and evolutionary learning is introduced to improve the stability of multi-agent training. Finally, a series of experiments were carried out on the commutator production line, and the results show that the proposed Bagging-GAT-based method can improve the prediction effect by about 25% in the case of small samples, and the MAERL-based optimization method can better deal with the coupling problem of reward function in the optimization process. Compared with the classical optimization method, the optimization effect is improved by 13% and a lot of optimization time is saved.}
} 913 913 }
914 914
@inproceedings{10.1145/3290605.3300912, 915 915 @inproceedings{10.1145/3290605.3300912,
author = {Kim, Yea-Seul and Walls, Logan A. and Krafft, Peter and Hullman, Jessica}, 916 916 author = {Kim, Yea-Seul and Walls, Logan A. and Krafft, Peter and Hullman, Jessica},
title = {A Bayesian Cognition Approach to Improve Data Visualization}, 917 917 title = {A Bayesian Cognition Approach to Improve Data Visualization},
year = {2019}, 918 918 year = {2019},
isbn = {9781450359702}, 919 919 isbn = {9781450359702},
publisher = {Association for Computing Machinery}, 920 920 publisher = {Association for Computing Machinery},
address = {New York, NY, USA}, 921 921 address = {New York, NY, USA},
url = {https://doi.org/10.1145/3290605.3300912}, 922 922 url = {https://doi.org/10.1145/3290605.3300912},
doi = {10.1145/3290605.3300912}, 923 923 doi = {10.1145/3290605.3300912},
abstract = {People naturally bring their prior beliefs to bear on how they interpret the new information, yet few formal models exist for accounting for the influence of users' prior beliefs in interactions with data presentations like visualizations. We demonstrate a Bayesian cognitive model for understanding how people interpret visualizations in light of prior beliefs and show how this model provides a guide for improving visualization evaluation. In a first study, we show how applying a Bayesian cognition model to a simple visualization scenario indicates that people's judgments are consistent with a hypothesis that they are doing approximate Bayesian inference. In a second study, we evaluate how sensitive our observations of Bayesian behavior are to different techniques for eliciting people subjective distributions, and to different datasets. We find that people don't behave consistently with Bayesian predictions for large sample size datasets, and this difference cannot be explained by elicitation technique. In a final study, we show how normative Bayesian inference can be used as an evaluation framework for visualizations, including of uncertainty.}, 924 924 abstract = {People naturally bring their prior beliefs to bear on how they interpret the new information, yet few formal models exist for accounting for the influence of users' prior beliefs in interactions with data presentations like visualizations. We demonstrate a Bayesian cognitive model for understanding how people interpret visualizations in light of prior beliefs and show how this model provides a guide for improving visualization evaluation. In a first study, we show how applying a Bayesian cognition model to a simple visualization scenario indicates that people's judgments are consistent with a hypothesis that they are doing approximate Bayesian inference. 
In a second study, we evaluate how sensitive our observations of Bayesian behavior are to different techniques for eliciting people subjective distributions, and to different datasets. We find that people don't behave consistently with Bayesian predictions for large sample size datasets, and this difference cannot be explained by elicitation technique. In a final study, we show how normative Bayesian inference can be used as an evaluation framework for visualizations, including of uncertainty.},
booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems}, 925 925 booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
pages = {1–14}, 926 926 pages = {1–14},
numpages = {14}, 927 927 numpages = {14},
keywords = {bayesian cognition, uncertainty elicitation, visualization}, 928 928 keywords = {bayesian cognition, uncertainty elicitation, visualization},
location = {Glasgow, Scotland, UK},
series = {CHI '19} 930 930 series = {CHI '19}
} 931 931 }
932 932
@article{DYER2024104827, 933 933 @article{DYER2024104827,
title = {Black-box Bayesian inference for agent-based models}, 934 934 title = {Black-box Bayesian inference for agent-based models},
journal = {Journal of Economic Dynamics and Control}, 935 935 journal = {Journal of Economic Dynamics and Control},
volume = {161}, 936 936 volume = {161},
pages = {104827}, 937 937 pages = {104827},
year = {2024}, 938 938 year = {2024},
issn = {0165-1889}, 939 939 issn = {0165-1889},
doi = {10.1016/j.jedc.2024.104827},
url = {https://www.sciencedirect.com/science/article/pii/S0165188924000198}, 941 941 url = {https://www.sciencedirect.com/science/article/pii/S0165188924000198},
author = {Joel Dyer and Patrick Cannon and J. Doyne Farmer and Sebastian M. Schmon}, 942 942 author = {Joel Dyer and Patrick Cannon and J. Doyne Farmer and Sebastian M. Schmon},
keywords = {Agent-based models, Bayesian inference, Neural networks, Parameter estimation, Simulation-based inference, Time series}, 943 943 keywords = {Agent-based models, Bayesian inference, Neural networks, Parameter estimation, Simulation-based inference, Time series},
abstract = {Simulation models, in particular agent-based models, are gaining popularity in economics and the social sciences. The considerable flexibility they offer, as well as their capacity to reproduce a variety of empirically observed behaviours of complex systems, give them broad appeal, and the increasing availability of cheap computing power has made their use feasible. Yet a widespread adoption in real-world modelling and decision-making scenarios has been hindered by the difficulty of performing parameter estimation for such models. In general, simulation models lack a tractable likelihood function, which precludes a straightforward application of standard statistical inference techniques. A number of recent works have sought to address this problem through the application of likelihood-free inference techniques, in which parameter estimates are determined by performing some form of comparison between the observed data and simulation output. However, these approaches are (a) founded on restrictive assumptions, and/or (b) typically require many hundreds of thousands of simulations. These qualities make them unsuitable for large-scale simulations in economics and the social sciences, and can cast doubt on the validity of these inference methods in such scenarios. In this paper, we investigate the efficacy of two classes of simulation-efficient black-box approximate Bayesian inference methods that have recently drawn significant attention within the probabilistic machine learning community: neural posterior estimation and neural density ratio estimation. We present a number of benchmarking experiments in which we demonstrate that neural network-based black-box methods provide state of the art parameter inference for economic simulation models, and crucially are compatible with generic multivariate or even non-Euclidean time-series data. 
In addition, we suggest appropriate assessment criteria for use in future benchmarking of approximate Bayesian inference procedures for simulation models in economics and the social sciences.} 944 944 abstract = {Simulation models, in particular agent-based models, are gaining popularity in economics and the social sciences. The considerable flexibility they offer, as well as their capacity to reproduce a variety of empirically observed behaviours of complex systems, give them broad appeal, and the increasing availability of cheap computing power has made their use feasible. Yet a widespread adoption in real-world modelling and decision-making scenarios has been hindered by the difficulty of performing parameter estimation for such models. In general, simulation models lack a tractable likelihood function, which precludes a straightforward application of standard statistical inference techniques. A number of recent works have sought to address this problem through the application of likelihood-free inference techniques, in which parameter estimates are determined by performing some form of comparison between the observed data and simulation output. However, these approaches are (a) founded on restrictive assumptions, and/or (b) typically require many hundreds of thousands of simulations. These qualities make them unsuitable for large-scale simulations in economics and the social sciences, and can cast doubt on the validity of these inference methods in such scenarios. In this paper, we investigate the efficacy of two classes of simulation-efficient black-box approximate Bayesian inference methods that have recently drawn significant attention within the probabilistic machine learning community: neural posterior estimation and neural density ratio estimation. 
We present a number of benchmarking experiments in which we demonstrate that neural network-based black-box methods provide state of the art parameter inference for economic simulation models, and crucially are compatible with generic multivariate or even non-Euclidean time-series data. In addition, we suggest appropriate assessment criteria for use in future benchmarking of approximate Bayesian inference procedures for simulation models in economics and the social sciences.}
} 945 945 }
946 946
@Article{Nikpour2021, 947 947 @Article{Nikpour2021,
author={Nikpour, Hoda 948 948 author={Nikpour, Hoda
and Aamodt, Agnar}, 949 949 and Aamodt, Agnar},
title={Inference and reasoning in a Bayesian knowledge-intensive CBR system}, 950 950 title={Inference and reasoning in a Bayesian knowledge-intensive CBR system},
journal={Progress in Artificial Intelligence}, 951 951 journal={Progress in Artificial Intelligence},
year={2021}, 952 952 year={2021},
month={Mar}, 953 953 month={Mar},
day={01}, 954 954 day={01},
volume={10}, 955 955 volume={10},
number={1}, 956 956 number={1},
pages={49-63}, 957 957 pages={49-63},
abstract={This paper presents the inference and reasoning methods in a Bayesian supported knowledge-intensive case-based reasoning (CBR) system called BNCreek. The inference and reasoning process in this system is a combination of three methods. The semantic network inference methods and the CBR method are employed to handle the difficulties of inferencing and reasoning in uncertain domains. The Bayesian network inference methods are employed to make the process more accurate. An experiment from oil well drilling as a complex and uncertain application domain is conducted. The system is evaluated against expert estimations and compared with seven other corresponding systems. The normalized discounted cumulative gain (NDCG) as a rank-based metric, the weighted error (WE), and root-square error (RSE) as the statistical metrics are employed to evaluate different aspects of the system capabilities. The results show the efficiency of the developed inference and reasoning methods.}, 958 958 abstract={This paper presents the inference and reasoning methods in a Bayesian supported knowledge-intensive case-based reasoning (CBR) system called BNCreek. The inference and reasoning process in this system is a combination of three methods. The semantic network inference methods and the CBR method are employed to handle the difficulties of inferencing and reasoning in uncertain domains. The Bayesian network inference methods are employed to make the process more accurate. An experiment from oil well drilling as a complex and uncertain application domain is conducted. The system is evaluated against expert estimations and compared with seven other corresponding systems. The normalized discounted cumulative gain (NDCG) as a rank-based metric, the weighted error (WE), and root-square error (RSE) as the statistical metrics are employed to evaluate different aspects of the system capabilities. The results show the efficiency of the developed inference and reasoning methods.},
issn={2192-6360}, 959 959 issn={2192-6360},
doi={10.1007/s13748-020-00223-1}, 960 960 doi={10.1007/s13748-020-00223-1},
url={https://doi.org/10.1007/s13748-020-00223-1} 961 961 url={https://doi.org/10.1007/s13748-020-00223-1}
} 962 962 }
963 963
@article{PRESCOTT2024112577, 964 964 @article{PRESCOTT2024112577,
title = {Efficient multifidelity likelihood-free Bayesian inference with adaptive computational resource allocation}, 965 965 title = {Efficient multifidelity likelihood-free Bayesian inference with adaptive computational resource allocation},
journal = {Journal of Computational Physics}, 966 966 journal = {Journal of Computational Physics},
volume = {496}, 967 967 volume = {496},
pages = {112577}, 968 968 pages = {112577},
year = {2024}, 969 969 year = {2024},
issn = {0021-9991}, 970 970 issn = {0021-9991},
doi = {10.1016/j.jcp.2023.112577},
url = {https://www.sciencedirect.com/science/article/pii/S0021999123006721}, 972 972 url = {https://www.sciencedirect.com/science/article/pii/S0021999123006721},
author = {Thomas P. Prescott and David J. Warne and Ruth E. Baker}, 973 973 author = {Thomas P. Prescott and David J. Warne and Ruth E. Baker},
keywords = {Likelihood-free Bayesian inference, Multifidelity approaches}, 974 974 keywords = {Likelihood-free Bayesian inference, Multifidelity approaches},
abstract = {Likelihood-free Bayesian inference algorithms are popular methods for inferring the parameters of complex stochastic models with intractable likelihoods. These algorithms characteristically rely heavily on repeated model simulations. However, whenever the computational cost of simulation is even moderately expensive, the significant burden incurred by likelihood-free algorithms leaves them infeasible for many practical applications. The multifidelity approach has been introduced in the context of approximate Bayesian computation to reduce the simulation burden of likelihood-free inference without loss of accuracy, by using the information provided by simulating computationally cheap, approximate models in place of the model of interest. In this work we demonstrate that multifidelity techniques can be applied in the general likelihood-free Bayesian inference setting. Analytical results on the optimal allocation of computational resources to simulations at different levels of fidelity are derived, and subsequently implemented practically. We provide an adaptive multifidelity likelihood-free inference algorithm that learns the relationships between models at different fidelities and adapts resource allocation accordingly, and demonstrate that this algorithm produces posterior estimates with near-optimal efficiency.} 975 975 abstract = {Likelihood-free Bayesian inference algorithms are popular methods for inferring the parameters of complex stochastic models with intractable likelihoods. These algorithms characteristically rely heavily on repeated model simulations. However, whenever the computational cost of simulation is even moderately expensive, the significant burden incurred by likelihood-free algorithms leaves them infeasible for many practical applications. 
The multifidelity approach has been introduced in the context of approximate Bayesian computation to reduce the simulation burden of likelihood-free inference without loss of accuracy, by using the information provided by simulating computationally cheap, approximate models in place of the model of interest. In this work we demonstrate that multifidelity techniques can be applied in the general likelihood-free Bayesian inference setting. Analytical results on the optimal allocation of computational resources to simulations at different levels of fidelity are derived, and subsequently implemented practically. We provide an adaptive multifidelity likelihood-free inference algorithm that learns the relationships between models at different fidelities and adapts resource allocation accordingly, and demonstrate that this algorithm produces posterior estimates with near-optimal efficiency.}
} 976 976 }
977 977
@article{RISTIC202030, 978 978 @article{RISTIC202030,
title = {A tutorial on uncertainty modeling for machine reasoning}, 979 979 title = {A tutorial on uncertainty modeling for machine reasoning},
journal = {Information Fusion}, 980 980 journal = {Information Fusion},
volume = {55}, 981 981 volume = {55},
pages = {30-44}, 982 982 pages = {30-44},
year = {2020}, 983 983 year = {2020},
issn = {1566-2535}, 984 984 issn = {1566-2535},
doi = {10.1016/j.inffus.2019.08.001},
url = {https://www.sciencedirect.com/science/article/pii/S1566253519301976}, 986 986 url = {https://www.sciencedirect.com/science/article/pii/S1566253519301976},
author = {Branko Ristic and Christopher Gilliam and Marion Byrne and Alessio Benavoli}, 987 987 author = {Branko Ristic and Christopher Gilliam and Marion Byrne and Alessio Benavoli},
keywords = {Information fusion, Uncertainty, Imprecision, Model based classification, Bayesian, Random sets, Belief function theory, Possibility functions, Imprecise probability}, 988 988 keywords = {Information fusion, Uncertainty, Imprecision, Model based classification, Bayesian, Random sets, Belief function theory, Possibility functions, Imprecise probability},
abstract = {Increasingly we rely on machine intelligence for reasoning and decision making under uncertainty. This tutorial reviews the prevalent methods for model-based autonomous decision making based on observations and prior knowledge, primarily in the context of classification. Both observations and the knowledge-base available for reasoning are treated as being uncertain. Accordingly, the central themes of this tutorial are quantitative modeling of uncertainty, the rules required to combine such uncertain information, and the task of decision making under uncertainty. The paper covers the main approaches to uncertain knowledge representation and reasoning, in particular, Bayesian probability theory, possibility theory, reasoning based on belief functions and finally imprecise probability theory. The main feature of the tutorial is that it illustrates various approaches with several testing scenarios, and provides MATLAB solutions for them as a supplementary material for an interested reader.} 989 989 abstract = {Increasingly we rely on machine intelligence for reasoning and decision making under uncertainty. This tutorial reviews the prevalent methods for model-based autonomous decision making based on observations and prior knowledge, primarily in the context of classification. Both observations and the knowledge-base available for reasoning are treated as being uncertain. Accordingly, the central themes of this tutorial are quantitative modeling of uncertainty, the rules required to combine such uncertain information, and the task of decision making under uncertainty. The paper covers the main approaches to uncertain knowledge representation and reasoning, in particular, Bayesian probability theory, possibility theory, reasoning based on belief functions and finally imprecise probability theory. 
The main feature of the tutorial is that it illustrates various approaches with several testing scenarios, and provides MATLAB solutions for them as a supplementary material for an interested reader.}
} 990 990 }
991 991
@article{CICIRELLO2022108619, 992 992 @article{CICIRELLO2022108619,
title = {Machine learning based optimization for interval uncertainty propagation}, 993 993 title = {Machine learning based optimization for interval uncertainty propagation},
journal = {Mechanical Systems and Signal Processing}, 994 994 journal = {Mechanical Systems and Signal Processing},
volume = {170}, 995 995 volume = {170},
pages = {108619}, 996 996 pages = {108619},
year = {2022}, 997 997 year = {2022},
issn = {0888-3270}, 998 998 issn = {0888-3270},
doi = {10.1016/j.ymssp.2021.108619},
url = {https://www.sciencedirect.com/science/article/pii/S0888327021009493}, 1000 1000 url = {https://www.sciencedirect.com/science/article/pii/S0888327021009493},
author = {Alice Cicirello and Filippo Giunta}, 1001 1001 author = {Alice Cicirello and Filippo Giunta},
keywords = {Bounded uncertainty, Bayesian optimization, Expensive-to-evaluate deterministic computer models, Gaussian process, Communicating uncertainty}, 1002 1002 keywords = {Bounded uncertainty, Bayesian optimization, Expensive-to-evaluate deterministic computer models, Gaussian process, Communicating uncertainty},
abstract = {Two non-intrusive uncertainty propagation approaches are proposed for the performance analysis of engineering systems described by expensive-to-evaluate deterministic computer models with parameters defined as interval variables. These approaches employ a machine learning based optimization strategy, the so-called Bayesian optimization, for evaluating the upper and lower bounds of a generic response variable over the set of possible responses obtained when each interval variable varies independently over its range. The lack of knowledge caused by not evaluating the response function for all the possible combinations of the interval variables is accounted for by developing a probabilistic description of the response variable itself by using a Gaussian Process regression model. An iterative procedure is developed for selecting a small number of simulations to be evaluated for updating this statistical model by using well-established acquisition functions and to assess the response bounds. In both approaches, an initial training dataset is defined. While one approach builds iteratively two distinct training datasets for evaluating separately the upper and lower bounds of the response variable, the other one builds iteratively a single training dataset. Consequently, the two approaches will produce different bound estimates at each iteration. The upper and lower response bounds are expressed as point estimates obtained from the mean function of the posterior distribution. Moreover, a confidence interval on each estimate is provided for effectively communicating to engineers when these estimates are obtained at a combination of the interval variables for which no deterministic simulation has been run. Finally, two metrics are proposed to define conditions for assessing if the predicted bound estimates can be considered satisfactory. 
The applicability of these two approaches is illustrated with two numerical applications, one focusing on vibration and the other on vibro-acoustics.} 1003 1003 abstract = {Two non-intrusive uncertainty propagation approaches are proposed for the performance analysis of engineering systems described by expensive-to-evaluate deterministic computer models with parameters defined as interval variables. These approaches employ a machine learning based optimization strategy, the so-called Bayesian optimization, for evaluating the upper and lower bounds of a generic response variable over the set of possible responses obtained when each interval variable varies independently over its range. The lack of knowledge caused by not evaluating the response function for all the possible combinations of the interval variables is accounted for by developing a probabilistic description of the response variable itself by using a Gaussian Process regression model. An iterative procedure is developed for selecting a small number of simulations to be evaluated for updating this statistical model by using well-established acquisition functions and to assess the response bounds. In both approaches, an initial training dataset is defined. While one approach builds iteratively two distinct training datasets for evaluating separately the upper and lower bounds of the response variable, the other one builds iteratively a single training dataset. Consequently, the two approaches will produce different bound estimates at each iteration. The upper and lower response bounds are expressed as point estimates obtained from the mean function of the posterior distribution. Moreover, a confidence interval on each estimate is provided for effectively communicating to engineers when these estimates are obtained at a combination of the interval variables for which no deterministic simulation has been run. 
Finally, two metrics are proposed to define conditions for assessing if the predicted bound estimates can be considered satisfactory. The applicability of these two approaches is illustrated with two numerical applications, one focusing on vibration and the other on vibro-acoustics.}
} 1004 1004 }
1005 1005
@INPROCEEDINGS{9278071, 1006 1006 @INPROCEEDINGS{9278071,
author={Petit, Maxime and Dellandrea, Emmanuel and Chen, Liming}, 1007 1007 author={Petit, Maxime and Dellandrea, Emmanuel and Chen, Liming},
booktitle={2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)}, 1008 1008 booktitle={2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)},
title={Bayesian Optimization for Developmental Robotics with Meta-Learning by Parameters Bounds Reduction}, 1009 1009 title={Bayesian Optimization for Developmental Robotics with Meta-Learning by Parameters Bounds Reduction},
year={2020}, 1010 1010 year={2020},
volume={}, 1011 1011 volume={},
number={}, 1012 1012 number={},
pages={1-8}, 1013 1013 pages={1-8},
keywords={Optimization;Robots;Task analysis;Bayes methods;Visualization;Service robots;Cognition;developmental robotics;long-term memory;meta learning;hyperparameters automatic optimization;case-based reasoning},
doi={10.1109/ICDL-EpiRob48136.2020.9278071} 1015 1015 doi={10.1109/ICDL-EpiRob48136.2020.9278071}
} 1016 1016 }
1017 1017
@article{LI2023477, 1018 1018 @article{LI2023477,
title = {Hierarchical and partitioned planning strategy for closed-loop devices in low-voltage distribution network based on improved KMeans partition method}, 1019 1019 title = {Hierarchical and partitioned planning strategy for closed-loop devices in low-voltage distribution network based on improved KMeans partition method},
journal = {Energy Reports}, 1020 1020 journal = {Energy Reports},
volume = {9}, 1021 1021 volume = {9},
pages = {477-485}, 1022 1022 pages = {477-485},
year = {2023}, 1023 1023 year = {2023},
note = {2022 The 3rd International Conference on Power and Electrical Engineering}, 1024 1024 note = {2022 The 3rd International Conference on Power and Electrical Engineering},
issn = {2352-4847}, 1025 1025 issn = {2352-4847},
doi = {10.1016/j.egyr.2023.05.161},
url = {https://www.sciencedirect.com/science/article/pii/S2352484723009137}, 1027 1027 url = {https://www.sciencedirect.com/science/article/pii/S2352484723009137},
author = {Jingqi Li and Junlin Li and Dan Wang and Chengxiong Mao and Zhitao Guan and Zhichao Liu and Miaomiao Du and Yuanzhuo Qi and Lexiang Wang and Wenge Liu and Pengfei Tang}, 1028 1028 author = {Jingqi Li and Junlin Li and Dan Wang and Chengxiong Mao and Zhitao Guan and Zhichao Liu and Miaomiao Du and Yuanzhuo Qi and Lexiang Wang and Wenge Liu and Pengfei Tang},
keywords = {Closed-loop device, Distribution network partition, Device planning, Hierarchical planning, Improved KMeans partition method}, 1029 1029 keywords = {Closed-loop device, Distribution network partition, Device planning, Hierarchical planning, Improved KMeans partition method},
abstract = {To improve the reliability of power supply, this paper proposes a hierarchical and partitioned planning strategy for closed-loop devices in low-voltage distribution network. Based on the geographic location and load situation of the distribution network area, an improved KMeans partition method is used to partition the area in the upper layer. In the lower layer, an intelligent algorithm is adopted to decide the numbers and placement locations of mobile low-voltage contact boxes and mobile seamless closed-loop load transfer devices in each partition with the goal of the highest closed-loop safety, the greatest improvement in annual power outage amount and the lowest cost. Finally, the feasibility and effectiveness of the proposed strategy are proved by an example.} 1030 1030 abstract = {To improve the reliability of power supply, this paper proposes a hierarchical and partitioned planning strategy for closed-loop devices in low-voltage distribution network. Based on the geographic location and load situation of the distribution network area, an improved KMeans partition method is used to partition the area in the upper layer. In the lower layer, an intelligent algorithm is adopted to decide the numbers and placement locations of mobile low-voltage contact boxes and mobile seamless closed-loop load transfer devices in each partition with the goal of the highest closed-loop safety, the greatest improvement in annual power outage amount and the lowest cost. Finally, the feasibility and effectiveness of the proposed strategy are proved by an example.}
} 1031 1031 }
1032 1032
@article{SAXENA2024100838, 1033 1033 @article{SAXENA2024100838,
title = {Hybrid KNN-SVM machine learning approach for solar power forecasting}, 1034 1034 title = {Hybrid KNN-SVM machine learning approach for solar power forecasting},
journal = {Environmental Challenges}, 1035 1035 journal = {Environmental Challenges},
volume = {14}, 1036 1036 volume = {14},
pages = {100838}, 1037 1037 pages = {100838},
year = {2024}, 1038 1038 year = {2024},
issn = {2667-0100}, 1039 1039 issn = {2667-0100},
doi = {10.1016/j.envc.2024.100838},
url = {https://www.sciencedirect.com/science/article/pii/S2667010024000040}, 1041 1041 url = {https://www.sciencedirect.com/science/article/pii/S2667010024000040},
author = {Nishant Saxena and Rahul Kumar and Yarrapragada K S S Rao and Dilbag Singh Mondloe and Nishikant Kishor Dhapekar and Abhishek Sharma and Anil Singh Yadav}, 1042 1042 author = {Nishant Saxena and Rahul Kumar and Yarrapragada K S S Rao and Dilbag Singh Mondloe and Nishikant Kishor Dhapekar and Abhishek Sharma and Anil Singh Yadav},
keywords = {Solar power forecasting, Hybrid model, KNN, Optimization, Solar energy, SVM},
abstract = {Predictions about solar power will have a significant impact on large-scale renewable energy plants. Photovoltaic (PV) power generation forecasting is particularly sensitive to measuring the uncertainty in weather conditions. Although several conventional techniques like long short-term memory (LSTM), support vector machine (SVM), etc. are available, but due to some restrictions, their application is limited. To enhance the precision of forecasting solar power from solar farms, a hybrid machine learning model that includes blends of the K-Nearest Neighbor (KNN) machine learning technique with the SVM to increase reliability for power system operators is proposed in this investigation. The conventional LSTM technique is also implemented to compare the performance of the proposed hybrid technique. The suggested hybrid model is improved by the use of structural diversity and data diversity in KNN and SVM, respectively. For the solar power predictions, the suggested method was tested on the Jodhpur real-time series dataset obtained from the data centers of weather stations using Meteonorm. The data set includes metrics such as Hourly Average Temperature (HAT), Hourly Total Sunlight Duration (HTSD), Hourly Total Global Solar Radiation (HTGSR), and Hourly Total Photovoltaic Energy Generation (HTPEG). The collated data has been segmented into training data, validation data, and testing data. Furthermore, the proposed technique performed better when evaluated on the three performance indices, viz., accuracy, sensitivity, and specificity. Compared with the conventional LSTM technique, the hybrid technique improved the prediction with 98\% accuracy.}
}

@article{RAKESH2023100898,
title = {Moving object detection using modified GMM based background subtraction},
journal = {Measurement: Sensors},
volume = {30},
pages = {100898},
year = {2023},
issn = {2665-9174},
doi = {https://doi.org/10.1016/j.measen.2023.100898},
url = {https://www.sciencedirect.com/science/article/pii/S2665917423002349},
author = {S. Rakesh and Nagaratna P. Hegde and M. {Venu Gopalachari} and D. Jayaram and Bhukya Madhu and Mohd Abdul Hameed and Ramdas Vankdothu and L.K. {Suresh Kumar}},
keywords = {Background subtraction, Gaussian mixture models, Intelligent video surveillance, Object detection},
abstract = {Academics have become increasingly interested in creating cutting-edge technologies to enhance Intelligent Video Surveillance (IVS) performance in terms of accuracy, speed, complexity, and deployment. It has been noted that precise object detection is the only way for IVS to function well in higher level applications including event interpretation, tracking, classification, and activity recognition. Through the use of cutting-edge techniques, the current study seeks to improve the performance accuracy of object detection techniques based on Gaussian Mixture Models (GMM). It is achieved by developing crucial phases in the object detecting process. In this study, it is discussed how to model each pixel as a mixture of Gaussians and how to update the model using an online k-means approximation. The adaptive mixture model's Gaussian distributions are then analyzed to identify which ones are more likely to be the product of a background process. Each pixel is categorized according to whether the background model is thought to include the Gaussian distribution that best depicts it.}
}

@article{JIAO2022540,
title = {Interpretable fuzzy clustering using unsupervised fuzzy decision trees},
journal = {Information Sciences},
volume = {611},
pages = {540-563},
year = {2022},
issn = {0020-0255},
doi = {https://doi.org/10.1016/j.ins.2022.08.077},
url = {https://www.sciencedirect.com/science/article/pii/S0020025522009872},
author = {Lianmeng Jiao and Haoyu Yang and Zhun-ga Liu and Quan Pan},
keywords = {Fuzzy clustering, Interpretable clustering, Unsupervised decision tree, Cluster merging},
abstract = {In clustering process, fuzzy partition performs better than hard partition when the boundaries between clusters are vague. Whereas, traditional fuzzy clustering algorithms produce less interpretable results, limiting their application in security, privacy, and ethics fields. To that end, this paper proposes an interpretable fuzzy clustering algorithm—fuzzy decision tree-based clustering which combines the flexibility of fuzzy partition with the interpretability of the decision tree. We constructed an unsupervised multi-way fuzzy decision tree to achieve the interpretability of clustering, in which each cluster is determined by one or several paths from the root to leaf nodes. The proposed algorithm comprises three main modules: feature and cutting point-selection, node fuzzy splitting, and cluster merging. The first two modules are repeated to generate an initial unsupervised decision tree, and the final module is designed to combine similar leaf nodes to form the final compact clustering model. Our algorithm optimizes an internal clustering validation metric to automatically determine the number of clusters without their initial positions. The synthetic and benchmark datasets were used to test the performance of the proposed algorithm. Furthermore, we provided two examples demonstrating its interest in solving practical problems.}
}

@article{ARNAUGONZALEZ2023101516,
title = {A methodological approach to enable natural language interaction in an Intelligent Tutoring System},
journal = {Computer Speech and Language},
volume = {81},
pages = {101516},
year = {2023},
issn = {0885-2308},
doi = {https://doi.org/10.1016/j.csl.2023.101516},
url = {https://www.sciencedirect.com/science/article/pii/S0885230823000359},
author = {Pablo Arnau-González and Miguel Arevalillo-Herráez and Romina Albornoz-De Luise and David Arnau},
keywords = {Intelligent tutoring systems (ITS), Interactive learning environments (ILE), Conversational agents, Rasa, Natural language understanding (NLU), Natural language processing (NLP)},
abstract = {In this paper, we present and evaluate the recent incorporation of a conversational agent into an Intelligent Tutoring System (ITS), using the open-source machine learning framework Rasa. Once it has been appropriately trained, this tool is capable of identifying the intention of a given text input and extracting the relevant entities related to the message content. We describe both the generation of a realistic training set in Spanish language that enables the creation of the required Natural Language Understanding (NLU) models and the evaluation of the resulting system. For the generation of the training set, we have followed a methodology that can be easily exported to other ITS. The model evaluation shows that the conversational agent can correctly identify the majority of the user intents, reporting an f1-score above 95\%, and cooperate with the ITS to produce a consistent dialogue flow that makes interaction more natural.}
}

@article{MAO20224065,
title = {An Exploratory Approach to Intelligent Quiz Question Recommendation},
journal = {Procedia Computer Science},
volume = {207},
pages = {4065-4074},
year = {2022},
note = {Knowledge-Based and Intelligent Information and Engineering Systems: Proceedings of the 26th International Conference KES2022},
issn = {1877-0509},
doi = {https://doi.org/10.1016/j.procs.2022.09.469},
url = {https://www.sciencedirect.com/science/article/pii/S1877050922013631},
author = {Kejie Mao and Qiwen Dong and Ye Wang and Daocheng Hong},
keywords = {question recommendation, two-sided recommender systems, reinforcement learning, intelligent tutoring},
abstract = {With the rapid advancement of ICT, the digital transformation on education is greatly accelerating in various applications. As a particularly prominent application of digital education, quiz question recommendation is playing a vital role in precision teaching, smart tutoring, and personalized learning. However, the looming challenge of quiz question recommender for students is to satisfy the question diversity demands for students ZPD (the zone of proximal development) stage dynamically online. Therefore, we propose to formalize quiz question recommendation with a novel approach of reinforcement learning based two-sided recommender system. We develop a recommendation framework RTR (Reinforcement-Learning based Two-sided Recommender Systems) for taking into account the interests of both sides of the system, learning and adapting to those interests in real time, and resulting in more satisfactory recommended content. This established recommendation framework captures question characters and student dynamic preferences by considering the emergence of both sides of the system, and it yields a better learning experience in the context of practical quiz question generation.}
}

@article{CLEMENTE2022118171,
title = {A proposal for an adaptive Recommender System based on competences and ontologies},
journal = {Expert Systems with Applications},
volume = {208},
pages = {118171},
year = {2022},
issn = {0957-4174},
doi = {https://doi.org/10.1016/j.eswa.2022.118171},
url = {https://www.sciencedirect.com/science/article/pii/S0957417422013392},
author = {Julia Clemente and Héctor Yago and Javier {de Pedro-Carracedo} and Javier Bueno},
keywords = {Recommender system, Ontology network, Methodological development, Student modeling},
abstract = {Context:
Competences represent an interesting pedagogical support in many processes like diagnosis or recommendation. From these, it is possible to infer information about the progress of the student to provide help targeted both, trainers who must make adaptive tutoring decisions for each learner, and students to detect and correct their learning weaknesses. For the correct development of any of these tasks, it is important to have a suitable student model that allows the representation of the most significant information possible about the student. Additionally, it would be very advantageous for this modeling to incorporate mechanisms from which it would be possible to infer more information about the student’s state of knowledge.
Objective:
To facilitate this goal, in this paper a new approach to develop an adaptive competence-based recommender system is proposed.
Method:
We present a methodological development guide as well as a set of ontological and non-ontological resources to develop and adapt the prototype of the proposed recommender system.
Results:
A modular flexible ontology network previously built for this purpose has been extended, which is responsible for recording the instructional design and student information. Furthermore, we describe a case study based on a first aid learning experience to assess the prototype with the proposed methodology.
Conclusions:
We highlight the relevance of flexibility and adaptability in learning modeling and recommendation processes. In order to promote improvement in the personalized learning of students, we present a Recommender System prototype taking advantages of ontologies, with a methodological guide, a broad taxonomy of recommendation criteria and the nature of competences. Future lines of research lines, including a more comprehensive evaluation of the system, will allow us to demonstrate in depth its adaptability according to the characteristics of the student, flexibility and extensibility for its integration in various environments and domains.}
}

@article{https://doi.org/10.1155/2023/2578286,
author = {Li, Linqing and Wang, Zhifeng},
title = {Knowledge Graph-Enhanced Intelligent Tutoring System Based on Exercise Representativeness and Informativeness},
journal = {International Journal of Intelligent Systems},
volume = {2023},
number = {1},
pages = {2578286},
doi = {https://doi.org/10.1155/2023/2578286},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1155/2023/2578286},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1155/2023/2578286},
abstract = {In the realm of online tutoring intelligent systems, e-learners are exposed to a substantial volume of learning content. The extraction and organization of exercises and skills hold significant importance in establishing clear learning objectives and providing appropriate exercise recommendations. Presently, knowledge graph-based recommendation algorithms have garnered considerable attention among researchers. However, these algorithms solely consider knowledge graphs with single relationships and do not effectively model exercise-rich features, such as exercise representativeness and informativeness. Consequently, this paper proposes a framework, namely, the Knowledge Graph Importance-Exercise Representativeness and Informativeness Framework, to address these two issues. The framework consists of four intricate components and a novel cognitive diagnosis model called the Neural Attentive Cognitive Diagnosis model to recommend the proper exercises. These components encompass the informativeness component, exercise representation component, knowledge importance component, and exercise representativeness component. The informativeness component evaluates the informational value of each exercise and identifies the candidate exercise set (EC) that exhibits the highest exercise informativeness. Moreover, the exercise representation component utilizes a graph neural network to process student records. The output of the graph neural network serves as the input for exercise-level attention and skill-level attention, ultimately generating exercise embeddings and skill embeddings. Furthermore, the skill embeddings are employed as input for the knowledge importance component. This component transforms a one-dimensional knowledge graph into a multidimensional one through four class relations and calculates skill importance weights based on novelty and popularity. Subsequently, the exercise representativeness component incorporates exercise weight knowledge coverage to select exercises from the candidate exercise set for the tested exercise set. Lastly, the cognitive diagnosis model leverages exercise representation and skill importance weights to predict student performance on the test set and estimate their knowledge state. To evaluate the effectiveness of our selection strategy, extensive experiments were conducted on two types of publicly available educational datasets. The experimental results demonstrate that our framework can recommend appropriate exercises to students, leading to improved student performance.},
year = {2023}
}

@inproceedings{badier:hal-04092828,
title = {{Comprendre les usages et effets d'un syst{\`e}me de recommandations p{\'e}dagogiques en contexte d'apprentissage non-formel}},
author = {Badier, Ana{\"e}lle and Lefort, Mathieu and Lefevre, Marie},
url = {https://hal.science/hal-04092828},
booktitle = {{EIAH'23}},
address = {Brest, France},
year = {2023},
month = jun,
hal_id = {hal-04092828},
hal_version = {v1},
}

@article{BADRA2023108920, 1153 1153 @article{BADRA2023108920,
title = {Case-based prediction – A survey}, 1154 1154 title = {Case-based prediction – A survey},
journal = {International Journal of Approximate Reasoning}, 1155 1155 journal = {International Journal of Approximate Reasoning},
volume = {158}, 1156 1156 volume = {158},
pages = {108920}, 1157 1157 pages = {108920},
year = {2023}, 1158 1158 year = {2023},
issn = {0888-613X}, 1159 1159 issn = {0888-613X},
doi = {https://doi.org/10.1016/j.ijar.2023.108920}, 1160 1160 doi = {https://doi.org/10.1016/j.ijar.2023.108920},
url = {https://www.sciencedirect.com/science/article/pii/S0888613X23000440}, 1161 1161 url = {https://www.sciencedirect.com/science/article/pii/S0888613X23000440},
author = {Fadi Badra and Marie-Jeanne Lesot}, 1162 1162 author = {Fadi Badra and Marie-Jeanne Lesot},
keywords = {Case-based prediction, Analogical transfer, Similarity}, 1163 1163 keywords = {Case-based prediction, Analogical transfer, Similarity},
abstract = {This paper clarifies the relation between case-based prediction and analogical transfer. Case-based prediction consists in predicting the outcome associated with a new case directly from its comparison with a set of cases retrieved from a case base, by relying solely on a structured memory and some similarity measures. Analogical transfer is a cognitive process that allows to derive some new information about a target situation by applying a plausible inference principle, according to which if two situations are similar with respect to some criteria, then it is plausible that they are also similar with respect to other criteria. Case-based prediction algorithms are known to apply analogical transfer to make predictions, but the existing approaches are diverse, and developing a unified theory of case-based prediction remains a challenge. In this paper, we show that a common principle underlying case-based prediction methods is that they interpret the plausible inference as a transfer of similarity knowledge from a situation space to an outcome space. Among all potential outcomes, the predicted outcome is the one that optimizes this transfer, i.e., that makes the similarities in the outcome space most compatible with the observed similarities in the situation space. Based on this observation, a systematic analysis of the different theories of case-based prediction is presented, where the approaches are distinguished according to the type of knowledge used to measure the compatibility between the two sets of similarity relations.}
}

@Article{jmse11050890,
AUTHOR = {Louvros, Panagiotis and Stefanidis, Fotios and Boulougouris, Evangelos and Komianos, Alexandros and Vassalos, Dracos},
TITLE = {Machine Learning and Case-Based Reasoning for Real-Time Onboard Prediction of the Survivability of Ships},
JOURNAL = {Journal of Marine Science and Engineering},
VOLUME = {11},
YEAR = {2023},
NUMBER = {5},
ARTICLE-NUMBER = {890},
URL = {https://www.mdpi.com/2077-1312/11/5/890},
ISSN = {2077-1312},
ABSTRACT = {The subject of damaged stability has greatly profited from the development of new tools and techniques in recent history. Specifically, the increased computational power and the probabilistic approach have transformed the subject, increasing accuracy and fidelity, hence allowing for a universal application and the inclusion of the most probable scenarios. Currently, all ships are evaluated for their stability and are expected to survive the dangers they will most likely face. However, further advancements in simulations have made it possible to further increase the fidelity and accuracy of simulated casualties. Multiple time domain and, to a lesser extent, Computational Fluid dynamics (CFD) solutions have been suggested as the next “evolutionary” step for damage stability. However, while those techniques are demonstrably more accurate, the computational power to utilize them for the task of probabilistic evaluation is not there yet. In this paper, the authors present a novel approach that aims to serve as a stopgap measure for introducing the time domain simulations in the existing framework. Specifically, the methodology presented serves the purpose of a fast decision support tool which is able to provide information regarding the ongoing casualty utilizing prior knowledge gained from simulations. This work was needed and developed for the purposes of the EU-funded project SafePASS.},
DOI = {10.3390/jmse11050890}
}

@Article{su14031366,
AUTHOR = {Chun, Se-Hak and Jang, Jae-Won},
TITLE = {A New Trend Pattern-Matching Method of Interactive Case-Based Reasoning for Stock Price Predictions},
JOURNAL = {Sustainability},
VOLUME = {14},
YEAR = {2022},
NUMBER = {3},
ARTICLE-NUMBER = {1366},
URL = {https://www.mdpi.com/2071-1050/14/3/1366},
ISSN = {2071-1050},
ABSTRACT = {In this paper, we suggest a new case-based reasoning method for stock price predictions using the knowledge of traders to select similar past patterns among nearest neighbors obtained from a traditional case-based reasoning machine. Thus, this method overcomes the limitation of conventional case-based reasoning, which does not consider how to retrieve similar neighbors from previous patterns in terms of a graphical pattern. In this paper, we show how the proposed method can be used when traders find similar time series patterns among nearest cases. For this, we suggest an interactive prediction system where traders can select similar patterns with individual knowledge among automatically recommended neighbors by case-based reasoning. In this paper, we demonstrate how traders can use their knowledge to select similar patterns using a graphical interface, serving as an exemplar for the target. These concepts are investigated against the backdrop of a practical application involving the prediction of three individual stock prices, i.e., Zoom, Airbnb, and Twitter, as well as the prediction of the Dow Jones Industrial Average (DJIA). The verification of the prediction results is compared with a random walk model based on the RMSE and Hit ratio. The results show that the proposed technique is more effective than the random walk model but it does not statistically surpass the random walk model.},
DOI = {10.3390/su14031366}
}

@Article{fire7040107,
AUTHOR = {Pei, Qiuyan and Jia, Zhichao and Liu, Jia and Wang, Yi and Wang, Junhui and Zhang, Yanqi},
TITLE = {Prediction of Coal Spontaneous Combustion Hazard Grades Based on Fuzzy Clustered Case-Based Reasoning},
JOURNAL = {Fire},
VOLUME = {7},
YEAR = {2024},
NUMBER = {4},
ARTICLE-NUMBER = {107},
URL = {https://www.mdpi.com/2571-6255/7/4/107},
ISSN = {2571-6255},
ABSTRACT = {Accurate prediction of the coal spontaneous combustion hazard grades is of great significance to ensure the safe production of coal mines. However, traditional coal temperature prediction models have low accuracy and do not predict the coal spontaneous combustion hazard grades. In order to accurately predict coal spontaneous combustion hazard grades, a prediction model of coal spontaneous combustion based on principal component analysis (PCA), case-based reasoning (CBR), fuzzy clustering (FM), and the snake optimization (SO) algorithm was proposed in this manuscript. Firstly, based on the change rule of the concentration of signature gases in the process of coal warming, a new method of classifying the risk of spontaneous combustion of coal was established. Secondly, MeanRadius-SMOTE was adopted to balance the data structure. The weights of the prediction indicators were calculated through PCA to enhance the prediction precision of the CBR model. Then, by employing FM in the case base, the computational cost of CBR was reduced and its computational efficiency was improved. The SO algorithm was used to determine the hyperparameters in the PCA-FM-CBR model. In addition, multiple comparative experiments were conducted to verify the superiority of the model proposed in this manuscript. The results indicated that SO-PCA-FM-CBR possesses good prediction performance and also improves computational efficiency. Finally, the authors of this manuscript adopted the Random Balance Designs—Fourier Amplitude Sensitivity Test (RBD-FAST) to explain the output of the model and analyzed the global importance of input variables. The results demonstrated that CO is the most important variable affecting the coal spontaneous combustion hazard grades.},
DOI = {10.3390/fire7040107}
}

@Article{Desmarais2012,
author={Desmarais, Michel C.
and Baker, Ryan S. J. d.},
title={A review of recent advances in learner and skill modeling in intelligent learning environments},
journal={User Modeling and User-Adapted Interaction},
year={2012},
month={Apr},
day={01},
volume={22},
number={1},
pages={9-38},
abstract={In recent years, learner models have emerged from the research laboratory and research classrooms into the wider world. Learner models are now embedded in real world applications which can claim to have thousands, or even hundreds of thousands, of users. Probabilistic models for skill assessment are playing a key role in these advanced learning environments. In this paper, we review the learner models that have played the largest roles in the success of these learning environments, and also the latest advances in the modeling and assessment of learner skills. We conclude by discussing related advancements in modeling other key constructs such as learner motivation, emotional and attentional state, meta-cognition and self-regulated learning, group learning, and the recent movement towards open and shared learner models.},
issn={1573-1391},
doi={10.1007/s11257-011-9106-8},
url={https://doi.org/10.1007/s11257-011-9106-8}
}

@article{Eide,
title={Dynamic slate recommendation with gated recurrent units and Thompson sampling},
author={Eide, Simen and Leslie, David S. and Frigessi, Arnoldo},
language={English},
type={article},
volume = {36},
year = {2022},
issn = {1573-756X},
doi = {10.1007/s10618-022-00849-w},
url = {https://doi.org/10.1007/s10618-022-00849-w},
abstract={We consider the problem of recommending relevant content to users of an internet platform in the form of lists of items, called slates. We introduce a variational Bayesian Recurrent Neural Net recommender system that acts on time series of interactions between the internet platform and the user, and which scales to real world industrial situations. The recommender system is tested both online on real users, and on an offline dataset collected from a Norwegian web-based marketplace, FINN.no, that is made public for research. This is one of the first publicly available datasets which includes all the slates that are presented to users as well as which items (if any) in the slates were clicked on. Such a data set allows us to move beyond the common assumption that implicitly assumes that users are considering all possible items at each interaction. Instead we build our likelihood using the items that are actually in the slate, and evaluate the strengths and weaknesses of both approaches theoretically and in experiments. We also introduce a hierarchical prior for the item parameters based on group memberships. Both item parameters and user preferences are learned probabilistically. Furthermore, we combine our model with bandit strategies to ensure learning, and introduce ‘in-slate Thompson sampling’ which makes use of the slates to maximise explorative opportunities. We show experimentally that explorative recommender strategies perform on par or above their greedy counterparts. Even without making use of exploration to learn more effectively, click rates increase simply because of improved diversity in the recommended slates.}
}

@InProceedings{10.1007/978-3-031-09680-8_14,
author={Sablayrolles, Louis
and Lefevre, Marie
and Guin, Nathalie
and Broisin, Julien},
editor={Crossley, Scott
and Popescu, Elvira},
title={Design and Evaluation of a Competency-Based Recommendation Process},
booktitle={Intelligent Tutoring Systems},
year={2022},
publisher={Springer International Publishing},
address={Cham},
pages={148--160},
abstract={The purpose of recommending activities to learners is to provide them with resources adapted to their needs, to facilitate the learning process. However, when teachers face a large number of students, it is difficult for them to recommend a personalized list of resources to each learner. In this paper, we are interested in the design of a system that automatically recommends resources to learners using their cognitive profile expressed in terms of competencies, but also according to a specific strategy defined by teachers. Our contributions relate to (1) a competency-based pedagogical strategy allowing to express the teacher's expertise, and (2) a recommendation process based on this strategy. This process has been experimented and assessed with students learning Shell programming in a first-year computer science degree. The first results show that (i) the items selected by our system from the set of possible items were relevant according to the experts; (ii) our system provided recommendations in a reasonable time; (iii) the recommendations were consulted by the learners but lacked usability.},
isbn={978-3-031-09680-8}
}

@inproceedings{10.1145/3578337.3605122,
author = {Xu, Shuyuan and Ge, Yingqiang and Li, Yunqi and Fu, Zuohui and Chen, Xu and Zhang, Yongfeng},
title = {Causal Collaborative Filtering},
year = {2023},
isbn = {9798400700736},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3578337.3605122},
doi = {10.1145/3578337.3605122},
abstract = {Many of the traditional recommendation algorithms are designed based on the fundamental idea of mining or learning correlative patterns from data to estimate the user-item correlative preference. However, pure correlative learning may lead to Simpson's paradox in predictions, and thus results in sacrificed recommendation performance. Simpson's paradox is a well-known statistical phenomenon, which causes confusions in statistical conclusions and ignoring the paradox may result in inaccurate decisions. Fortunately, causal and counterfactual modeling can help us to think outside of the observational data for user modeling and personalization so as to tackle such issues. In this paper, we propose Causal Collaborative Filtering (CCF) --- a general framework for modeling causality in collaborative filtering and recommendation. We provide a unified causal view of CF and mathematically show that many of the traditional CF algorithms are actually special cases of CCF under simplified causal graphs. We then propose a conditional intervention approach for do-operations so that we can estimate the user-item causal preference based on the observational data. Finally, we further propose a general counterfactual constrained learning framework for estimating the user-item preferences. Experiments are conducted on two types of real-world datasets---traditional and randomized trial data---and results show that our framework can improve the recommendation performance and reduce the Simpson's paradox problem of many CF algorithms.},
booktitle = {Proceedings of the 2023 ACM SIGIR International Conference on Theory of Information Retrieval},
pages = {235–245},
numpages = {11},
keywords = {recommender systems, counterfactual reasoning, collaborative filtering, causal analysis, Simpson's paradox},
location = {Taipei, Taiwan},
series = {ICTIR '23}
}

@inproceedings{10.1145/3583780.3615048,
author = {Zhu, Zheqing and Van Roy, Benjamin},
title = {Scalable Neural Contextual Bandit for Recommender Systems},
year = {2023},
isbn = {9798400701245},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3583780.3615048},
doi = {10.1145/3583780.3615048},
abstract = {High-quality recommender systems ought to deliver both innovative and relevant content through effective and exploratory interactions with users. Yet, supervised learning-based neural networks, which form the backbone of many existing recommender systems, only leverage recognized user interests, falling short when it comes to efficiently uncovering unknown user preferences. While there has been some progress with neural contextual bandit algorithms towards enabling online exploration through neural networks, their onerous computational demands hinder widespread adoption in real-world recommender systems. In this work, we propose a scalable sample-efficient neural contextual bandit algorithm for recommender systems. To do this, we design an epistemic neural network architecture, Epistemic Neural Recommendation (ENR), that enables Thompson sampling at a large scale. In two distinct large-scale experiments with real-world tasks, ENR significantly boosts click-through rates and user ratings by at least 9\% and 6\% respectively compared to state-of-the-art neural contextual bandit algorithms. Furthermore, it achieves equivalent performance with at least 29\% fewer user interactions compared to the best-performing baseline algorithm. Remarkably, while accomplishing these improvements, ENR demands orders of magnitude fewer computational resources than neural contextual bandit baseline algorithms.},
booktitle = {Proceedings of the 32nd ACM International Conference on Information and Knowledge Management},
pages = {3636–3646},
numpages = {11},
keywords = {contextual bandits, decision making under uncertainty, exploration vs exploitation, recommender systems, reinforcement learning},
location = {Birmingham, United Kingdom},
series = {CIKM '23}
}

@ARTICLE{10494875,
author={Ghoorchian, Saeed and Kortukov, Evgenii and Maghsudi, Setareh},
journal={IEEE Open Journal of Signal Processing},
title={Non-Stationary Linear Bandits With Dimensionality Reduction for Large-Scale Recommender Systems},
year={2024},
volume={5},
number={},
pages={548-558},
keywords={Vectors;Recommender systems;Decision making;Runtime;Signal processing algorithms;Covariance matrices;Robustness;Decision-making;multi-armed bandit;non-stationary environment;online learning;recommender systems},
doi={10.1109/OJSP.2024.3386490}
}
1305 1305
@article{GIANNIKIS2024111752,
title = {Reinforcement learning for addressing the cold-user problem in recommender systems},
journal = {Knowledge-Based Systems},
volume = {294},
pages = {111752},
year = {2024},
issn = {0950-7051},
doi = {10.1016/j.knosys.2024.111752},
url = {https://www.sciencedirect.com/science/article/pii/S0950705124003873},
author = {Stelios Giannikis and Flavius Frasincar and David Boekestijn},
keywords = {Recommender systems, Reinforcement learning, Active learning, Cold-user problem},
abstract = {Recommender systems are widely used in webshops because of their ability to provide users with personalized recommendations. However, the cold-user problem (i.e., recommending items to new users) is an important issue many webshops face. With the recent General Data Protection Regulation in Europe, the use of additional user information such as demographics is not possible without the user’s explicit consent. Several techniques have been proposed to solve the cold-user problem. Many of these techniques utilize Active Learning (AL) methods, which let cold users rate items to provide better recommendations for them. In this research, we propose two novel approaches that combine reinforcement learning with AL to elicit the users’ preferences and provide them with personalized recommendations. We compare reinforcement learning approaches that are either AL-based or item-based, where the latter predicts users’ ratings of an item by using their ratings of similar items. Differently than many of the existing approaches, this comparison is made based on implicit user information. Using a large real-world dataset, we show that the item-based strategy is more accurate than the AL-based strategy as well as several existing AL strategies.}
}

@article{IFTIKHAR2024121541,
title = {A reinforcement learning recommender system using bi-clustering and Markov Decision Process},
journal = {Expert Systems with Applications},
volume = {237},
pages = {121541},
year = {2024},
issn = {0957-4174},
doi = {10.1016/j.eswa.2023.121541},
url = {https://www.sciencedirect.com/science/article/pii/S0957417423020432},
author = {Arta Iftikhar and Mustansar Ali Ghazanfar and Mubbashir Ayub and Saad {Ali Alahmari} and Nadeem Qazi and Julie Wall},
keywords = {Reinforcement learning, Markov Decision Process, Bi-clustering, Q-learning, Policy},
abstract = {Collaborative filtering (CF) recommender systems are static in nature and does not adapt well with changing user preferences. User preferences may change after interaction with a system or after buying a product. Conventional CF clustering algorithms only identifies the distribution of patterns and hidden correlations globally. However, the impossibility of discovering local patterns by these algorithms, headed to the popularization of bi-clustering algorithms. Bi-clustering algorithms can analyze all dataset dimensions simultaneously and consequently, discover local patterns that deliver a better understanding of the underlying hidden correlations. In this paper, we modelled the recommendation problem as a sequential decision-making problem using Markov Decision Processes (MDP). To perform state representation for MDP, we first converted user-item votings matrix to a binary matrix. Then we performed bi-clustering on this binary matrix to determine a subset of similar rows and columns. A bi-cluster merging algorithm is designed to merge similar and overlapping bi-clusters. These bi-clusters are then mapped to a squared grid (SG). RL is applied on this SG to determine best policy to give recommendation to users. Start state is determined using the Improved Triangle Similarity (ITR) similarity measure. Reward function is computed as grid state overlapping in terms of users and items in current and prospective next state. A thorough comparative analysis was conducted, encompassing a diverse array of methodologies, including RL-based, pure Collaborative Filtering (CF), and clustering methods. The results demonstrate that our proposed method outperforms its competitors in terms of precision, recall, and optimal policy learning.}
}

@article{Soto2,
author={Soto-Forero, Daniel and Ackermann, Simha and Betbeder, Marie-Laure and Henriet, Julien},
title={Automatic Real-Time Adaptation of Training Session Difficulty Using Rules and Reinforcement Learning in the AI-VT ITS},
journal = {International Journal of Modern Education and Computer Science (IJMECS)},
volume = {16},
pages = {56--71},
year = {2024},
issn = {2075-0161},
doi = {10.5815/ijmecs.2024.03.05},
url = {https://www.mecs-press.org/ijmecs/ijmecs-v16-n3/v16n3-5.html},
keywords={Real Time Adaptation, Intelligent Training System, Thompson Sampling, Case-Based Reasoning, Automatic Adaptation},
abstract={Some of the most common and typical issues in the field of intelligent tutoring systems (ITS) are (i) the correct identification of learners’ difficulties in the learning process, (ii) the adaptation of content or presentation of the system according to the difficulties encountered, and (iii) the ability to adapt without initial data (cold-start). In some cases, the system tolerates modifications after the realization and assessment of competences. Other systems require complicated real-time adaptation since only a limited number of data can be captured. In that case, it must be analyzed properly and with a certain precision in order to obtain the appropriate adaptations. Generally, for the adaptation step, the ITS gathers common learners together and adapts their training similarly. Another type of adaptation is more personalized, but requires acquired or estimated information about each learner (previous grades, probability of success, etc.). Some of these parameters may be difficult to obtain, and others are imprecise and can lead to misleading adaptations. The adaptation using machine learning requires prior training with a lot of data. This article presents a model for the real time automatic adaptation of a predetermined session inside an ITS called AI-VT. This adaptation process is part of a case-based reasoning global model. The characteristics of the model proposed in this paper (i) require a limited number of data in order to generate a personalized adaptation, (ii) do not require training, (iii) are based on the correlation to complexity levels, and (iv) are able to adapt even at the cold-start stage. The proposed model is presented with two different configurations, deterministic and stochastic. The model has been tested with a database of 1000 learners, corresponding to different knowledge levels in three different scenarios. The results show the dynamic adaptation of the proposed model in both versions, with the adaptations obtained helping the system to evolve more rapidly and identify learner weaknesses in the different levels of complexity as well as the generation of pertinent recommendations in specific cases for each learner capacity.}
}

@InProceedings{10.1007/978-3-031-63646-2_11,
author={Soto-Forero, Daniel and Betbeder, Marie-Laure and Henriet, Julien},
editor={Recio-Garcia, Juan A. and Orozco-del-Castillo, Mauricio G. and Bridge, Derek},
title={Ensemble Stacking Case-Based Reasoning for Regression},
booktitle={Case-Based Reasoning Research and Development},
year={2024},
publisher={Springer Nature Switzerland},
address={Cham},
pages={159--174},
abstract={This paper presents a case-based reasoning algorithm with a two-stage iterative double stacking to find approximate solutions to one and multidimensional regression problems. This approach does not require training, so it can work with dynamic data at run time. The solutions are generated using stochastic algorithms in order to allow exploration of the solution space. The evaluation is performed by transforming the regression problem into an optimization problem with an associated objective function. The algorithm has been tested in comparison with nine classical regression algorithms on ten different regression databases extracted from the UCI site. The results show that the proposed algorithm generates solutions in most cases quite close to the real solutions. According to the RMSE, the proposed algorithm is globally among the four best algorithms; according to the MAE, it ranks fourth among the ten evaluated, suggesting that the results are reasonably good.},
isbn={978-3-031-63646-2}
}

@article{ZHANG2018189,
title = {A three learning states Bayesian knowledge tracing model},
journal = {Knowledge-Based Systems},
volume = {148},
pages = {189--201},
year = {2018},
issn = {0950-7051},
doi = {10.1016/j.knosys.2018.03.001},
url = {https://www.sciencedirect.com/science/article/pii/S0950705118301199},
author = {Kai Zhang and Yiyu Yao},
keywords = {Bayesian knowledge tracing, Three-way decisions},
abstract = {This paper proposes a Bayesian knowledge tracing model with three learning states by extending the original two learning states. We divide a learning process into three sections by using an evaluation function for three-way decisions. Advantages of such a trisection over traditional bisection are demonstrated by comparative experiments. We develop a three learning states model based on the trisection of the learning process. We apply the model to a series of comparative experiments with the original model. Qualitative and quantitative analyses of the experimental results indicate the superior performance of the proposed model over the original model in terms of prediction accuracies and related statistical measures.}
}

@article{Li_2024,
doi = {10.3847/1538-4357/ad3215},
url = {https://dx.doi.org/10.3847/1538-4357/ad3215},
year = {2024},
month = {apr},
publisher = {The American Astronomical Society},
volume = {965},
number = {2},
pages = {125},
author = {Zhigang Li and Zhejie Ding and Yu Yu and Pengjie Zhang},
title = {The Kullback–Leibler Divergence and the Convergence Rate of Fast Covariance Matrix Estimators in Galaxy Clustering Analysis},
journal = {The Astrophysical Journal},
abstract = {We present a method to quantify the convergence rate of the fast estimators of the covariance matrices in the large-scale structure analysis. Our method is based on the Kullback–Leibler (KL) divergence, which describes the relative entropy of two probability distributions. As a case study, we analyze the delete-d jackknife estimator for the covariance matrix of the galaxy correlation function. We introduce the information factor or the normalized KL divergence with the help of a set of baseline covariance matrices to diagnose the information contained in the jackknife covariance matrix. Using a set of quick particle mesh mock catalogs designed for the Baryon Oscillation Spectroscopic Survey DR11 CMASS galaxy survey, we find that the jackknife resampling method succeeds in recovering the covariance matrix with 10 times fewer simulation mocks than that of the baseline method at small scales (s ≤ 40 h$^{-1}$ Mpc). However, the ability to reduce the number of mock catalogs is degraded at larger scales due to the increasing bias on the jackknife covariance matrix. Note that the analysis in this paper can be applied to any fast estimator of the covariance matrix for galaxy clustering measurements.}
}

@Article{Kim2024,
author={Kim, Wonjik},
title={A Random Focusing Method with Jensen--Shannon Divergence for Improving Deep Neural Network Performance Ensuring Architecture Consistency},
journal={Neural Processing Letters},
year={2024},
month={Jun},
day={17},
volume={56},
number={4},
pages={199},
abstract={Multiple hidden layers in deep neural networks perform non-linear transformations, enabling the extraction of meaningful features and the identification of relationships between input and output data. However, the gap between the training and real-world data can result in network overfitting, prompting the exploration of various preventive methods. The regularization technique called 'dropout' is widely used for deep learning models to improve the training of robust and generalized features. During the training phase with dropout, neurons in a particular layer are randomly selected to be ignored for each input. This random exclusion of neurons encourages the network to depend on different subsets of neurons at different times, fostering robustness and reducing sensitivity to specific neurons. This study introduces a novel approach called random focusing, departing from complete neuron exclusion in dropout. The proposed random focusing selectively highlights random neurons during training, aiming for a smoother transition between training and inference phases while keeping network architecture consistent. This study also incorporates Jensen--Shannon Divergence to enhance the stability and efficacy of the random focusing method. Experimental validation across tasks like image classification and semantic segmentation demonstrates the adaptability of the proposed methods across different network architectures, including convolutional neural networks and transformers.},
issn={1573-773X},
doi={10.1007/s11063-024-11668-z},
url={https://doi.org/10.1007/s11063-024-11668-z}
}

@InProceedings{pmlr-v238-ou24a,
title = {Thompson Sampling Itself is Differentially Private},
author = {Ou, Tingting and Cummings, Rachel and Avella Medina, Marco},
booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
pages = {1576--1584},
year = {2024},
editor = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
volume = {238},
series = {Proceedings of Machine Learning Research},
month = {02--04 May},
publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v238/ou24a/ou24a.pdf},
url = {https://proceedings.mlr.press/v238/ou24a.html},
abstract = {In this work we first show that the classical Thompson sampling algorithm for multi-arm bandits is differentially private as-is, without any modification. We provide per-round privacy guarantees as a function of problem parameters and show composition over $T$ rounds; since the algorithm is unchanged, existing $O(\sqrt{NT\log N})$ regret bounds still hold and there is no loss in performance due to privacy. We then show that simple modifications – such as pre-pulling all arms a fixed number of times, increasing the sampling variance – can provide tighter privacy guarantees. We again provide privacy guarantees that now depend on the new parameters introduced in the modification, which allows the analyst to tune the privacy guarantee as desired. We also provide a novel regret analysis for this new algorithm, and show how the new parameters also impact expected regret. Finally, we empirically validate and illustrate our theoretical findings in two parameter regimes and demonstrate that tuning the new parameters substantially improve the privacy-regret tradeoff.}
}

@Article{math12111758,
AUTHOR = {Uguina, Antonio R. and Gomez, Juan F. and Panadero, Javier and Martínez-Gavara, Anna and Juan, Angel A.},
TITLE = {A Learnheuristic Algorithm Based on Thompson Sampling for the Heterogeneous and Dynamic Team Orienteering Problem},
JOURNAL = {Mathematics},
VOLUME = {12},
YEAR = {2024},
NUMBER = {11},
ARTICLE-NUMBER = {1758},
URL = {https://www.mdpi.com/2227-7390/12/11/1758},
ISSN = {2227-7390},
ABSTRACT = {The team orienteering problem (TOP) is a well-studied optimization challenge in the field of Operations Research, where multiple vehicles aim to maximize the total collected rewards within a given time limit by visiting a subset of nodes in a network. With the goal of including dynamic and uncertain conditions inherent in real-world transportation scenarios, we introduce a novel dynamic variant of the TOP that considers real-time changes in environmental conditions affecting reward acquisition at each node. Specifically, we model the dynamic nature of environmental factors—such as traffic congestion, weather conditions, and battery level of each vehicle—to reflect their impact on the probability of obtaining the reward when visiting each type of node in a heterogeneous network. To address this problem, a learnheuristic optimization framework is proposed. It combines a metaheuristic algorithm with Thompson sampling to make informed decisions in dynamic environments. Furthermore, we conduct empirical experiments to assess the impact of varying reward probabilities on resource allocation and route planning within the context of this dynamic TOP, where nodes might offer a different reward behavior depending upon the environmental conditions. Our numerical results indicate that the proposed learnheuristic algorithm outperforms static approaches, achieving up to 25% better performance in highly dynamic scenarios. Our findings highlight the effectiveness of our approach in adapting to dynamic conditions and optimizing decision-making processes in transportation systems.},
DOI = {10.3390/math12111758} 1433 1433 DOI = {10.3390/math12111758}
} 1434 1434 }
1435 1435
@inproceedings{NEURIPS2023_9d8cf124, 1436 1436 @inproceedings{NEURIPS2023_9d8cf124,
author = {Abel, David and Barreto, Andre and Van Roy, Benjamin and Precup, Doina and van Hasselt, Hado P and Singh, Satinder}, 1437 1437 author = {Abel, David and Barreto, Andre and Van Roy, Benjamin and Precup, Doina and van Hasselt, Hado P and Singh, Satinder},
booktitle = {Advances in Neural Information Processing Systems}, 1438 1438 booktitle = {Advances in Neural Information Processing Systems},
editor = {A. Oh and T. Naumann and A. Globerson and K. Saenko and M. Hardt and S. Levine}, 1439 1439 editor = {A. Oh and T. Naumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
pages = {50377--50407}, 1440 1440 pages = {50377--50407},
publisher = {Curran Associates, Inc.}, 1441 1441 publisher = {Curran Associates, Inc.},
title = {A Definition of Continual Reinforcement Learning}, 1442 1442 title = {A Definition of Continual Reinforcement Learning},
url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/9d8cf1247786d6dfeefeeb53b8b5f6d7-Paper-Conference.pdf}, 1443 1443 url = {https://proceedings.neurips.cc/paper_files/paper/2023/file/9d8cf1247786d6dfeefeeb53b8b5f6d7-Paper-Conference.pdf},
volume = {36}, 1444 1444 volume = {36},
year = {2023} 1445 1445 year = {2023}
} 1446 1446 }
1447 1447
@article{NGUYEN2024111566, 1448 1448 @article{NGUYEN2024111566,
title = {Dynamic metaheuristic selection via Thompson Sampling for online optimization}, 1449 1449 title = {Dynamic metaheuristic selection via Thompson Sampling for online optimization},
journal = {Applied Soft Computing}, 1450 1450 journal = {Applied Soft Computing},
volume = {158}, 1451 1451 volume = {158},
pages = {111566}, 1452 1452 pages = {111566},
year = {2024}, 1453 1453 year = {2024},
issn = {1568-4946}, 1454 1454 issn = {1568-4946},
doi = {https://doi.org/10.1016/j.asoc.2024.111566}, 1455 1455 doi = {https://doi.org/10.1016/j.asoc.2024.111566},
url = {https://www.sciencedirect.com/science/article/pii/S1568494624003405}, 1456 1456 url = {https://www.sciencedirect.com/science/article/pii/S1568494624003405},
author = {Alain Nguyen}, 1457 1457 author = {Alain Nguyen},
keywords = {Selection hyper-heuristic, Multi-armed-bandit, Thompson Sampling, Online optimization}, 1458 1458 keywords = {Selection hyper-heuristic, Multi-armed-bandit, Thompson Sampling, Online optimization},
abstract = {It is acknowledged that no single heuristic can outperform all the others in every optimization problem. This has given rise to hyper-heuristic methods for providing solutions to a wider range of problems. In this work, a set of five non-competing low-level heuristics is proposed in a hyper-heuristic framework. The multi-armed bandit problem analogy is efficiently leveraged and Thompson Sampling is used to actively select the best heuristic for online optimization. The proposed method is compared against ten population-based metaheuristic algorithms on the well-known CEC’05 optimizing benchmark consisting of 23 functions of various landscapes. The results show that the proposed algorithm is the only one able to find the global minimum of all functions with remarkable consistency.} 1459 1459 abstract = {It is acknowledged that no single heuristic can outperform all the others in every optimization problem. This has given rise to hyper-heuristic methods for providing solutions to a wider range of problems. In this work, a set of five non-competing low-level heuristics is proposed in a hyper-heuristic framework. The multi-armed bandit problem analogy is efficiently leveraged and Thompson Sampling is used to actively select the best heuristic for online optimization. The proposed method is compared against ten population-based metaheuristic algorithms on the well-known CEC’05 optimizing benchmark consisting of 23 functions of various landscapes. The results show that the proposed algorithm is the only one able to find the global minimum of all functions with remarkable consistency.}
} 1460 1460 }
1461 1461
@Article{Malladi2024, 1462 1462 @Article{Malladi2024,
author={Malladi, Rama K.}, 1463 1463 author={Malladi, Rama K.},
title={Application of Supervised Machine Learning Techniques to Forecast the COVID-19 U.S. Recession and Stock Market Crash}, 1464 1464 title={Application of Supervised Machine Learning Techniques to Forecast the COVID-19 U.S. Recession and Stock Market Crash},
journal={Computational Economics}, 1465 1465 journal={Computational Economics},
year={2024}, 1466 1466 year={2024},
month={Mar}, 1467 1467 month={Mar},
day={01}, 1468 1468 day={01},
volume={63}, 1469 1469 volume={63},
number={3}, 1470 1470 number={3},
pages={1021-1045}, 1471 1471 pages={1021-1045},
abstract={Machine learning (ML), a transformational technology, has been successfully applied to forecasting events down the road. This paper demonstrates that supervised ML techniques can be used in recession and stock market crash (more than 20{\%} drawdown) forecasting. After learning from strictly past monthly data, ML algorithms detected the Covid-19 recession by December 2019, six months before the official NBER announcement. Moreover, ML algorithms foresaw the March 2020 S{\&}P500 crash two months before it happened. The current labor market and housing are harbingers of a future U.S. recession (in 3 months). Financial factors have a bigger role to play in stock market crashes than economic factors. The labor market appears as a top-two feature in predicting both recessions and crashes. ML algorithms detect that the U.S. exited recession before December 2020, even though the official NBER announcement has not yet been made. They also do not anticipate a U.S. stock market crash before March 2021. ML methods have three times higher false discovery rates of recessions compared to crashes.}, 1472 1472 abstract={Machine learning (ML), a transformational technology, has been successfully applied to forecasting events down the road. This paper demonstrates that supervised ML techniques can be used in recession and stock market crash (more than 20{\%} drawdown) forecasting. After learning from strictly past monthly data, ML algorithms detected the Covid-19 recession by December 2019, six months before the official NBER announcement. Moreover, ML algorithms foresaw the March 2020 S{\&}P500 crash two months before it happened. The current labor market and housing are harbingers of a future U.S. recession (in 3 months). Financial factors have a bigger role to play in stock market crashes than economic factors. The labor market appears as a top-two feature in predicting both recessions and crashes. ML algorithms detect that the U.S. 
issn={1572-9974}, 1473 1473 issn={1572-9974},
doi={10.1007/s10614-022-10333-8}, 1474 1474 doi={10.1007/s10614-022-10333-8},
url={https://doi.org/10.1007/s10614-022-10333-8} 1475 1475 url={https://doi.org/10.1007/s10614-022-10333-8}
} 1476 1476 }
1477 1477
@INPROCEEDINGS{10493943, 1478 1478 @INPROCEEDINGS{10493943,
author={R. Subha and N. Gayathri and S. Sasireka and R. Sathiyabanu and B. Santhiyaa and B. Varshini},
booktitle={2024 5th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI)}, 1480 1480 booktitle={2024 5th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI)},
title={Intelligent Tutoring Systems using Long Short-Term Memory Networks and Bayesian Knowledge Tracing}, 1481 1481 title={Intelligent Tutoring Systems using Long Short-Term Memory Networks and Bayesian Knowledge Tracing},
year={2024}, 1482 1482 year={2024},
pages={24-29}, 1485 1485 pages={24-29},
keywords={Knowledge engineering;Filtering;Estimation;Transforms;Real-time systems;Bayes methods;Problem-solving;Intelligent Tutoring System (ITS);Long Short-Term Memory (LSTM);Bayesian Knowledge Tracing (BKT);Reinforcement Learning}, 1486 1486 keywords={Knowledge engineering;Filtering;Estimation;Transforms;Real-time systems;Bayes methods;Problem-solving;Intelligent Tutoring System (ITS);Long Short-Term Memory (LSTM);Bayesian Knowledge Tracing (BKT);Reinforcement Learning},
doi={10.1109/ICMCSI61536.2024.00010} 1487 1487 doi={10.1109/ICMCSI61536.2024.00010}
} 1488 1488 }
1489 1489
@article{https://doi.org/10.1155/2024/4067721, 1490 1490 @article{https://doi.org/10.1155/2024/4067721,
author = {Ahmed, Esmael}, 1491 1491 author = {Ahmed, Esmael},
title = {Student Performance Prediction Using Machine Learning Algorithms}, 1492 1492 title = {Student Performance Prediction Using Machine Learning Algorithms},
journal = {Applied Computational Intelligence and Soft Computing}, 1493 1493 journal = {Applied Computational Intelligence and Soft Computing},
volume = {2024}, 1494 1494 volume = {2024},
number = {1}, 1495 1495 number = {1},
pages = {4067721}, 1496 1496 pages = {4067721},
doi = {https://doi.org/10.1155/2024/4067721}, 1497 1497 doi = {https://doi.org/10.1155/2024/4067721},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1155/2024/4067721}, 1498 1498 url = {https://onlinelibrary.wiley.com/doi/abs/10.1155/2024/4067721},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1155/2024/4067721}, 1499 1499 eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1155/2024/4067721},
abstract = {Education is crucial for a productive life and providing necessary resources. With the advent of technology like artificial intelligence, higher education institutions are incorporating technology into traditional teaching methods. Predicting academic success has gained interest in education as a strong academic record improves a university’s ranking and increases student employment opportunities. Modern learning institutions face challenges in analyzing performance, providing high-quality education, formulating strategies for evaluating students’ performance, and identifying future needs. E-learning is a rapidly growing and advanced form of education, where students enroll in online courses. Platforms like Intelligent Tutoring Systems (ITS), learning management systems (LMS), and massive open online courses (MOOC) use educational data mining (EDM) to develop automatic grading systems, recommenders, and adaptative systems. However, e-learning is still considered a challenging learning environment due to the lack of direct interaction between students and course instructors. Machine learning (ML) is used in developing adaptive intelligent systems that can perform complex tasks beyond human abilities. Some areas of applications of ML algorithms include cluster analysis, pattern recognition, image processing, natural language processing, and medical diagnostics. In this research work, K-means, a clustering data mining technique using Davies’ Bouldin method, obtains clusters to find important features affecting students’ performance. The study found that the SVM algorithm had the best prediction results after parameter adjustment, with a 96\% accuracy rate. In this paper, the researchers have examined the functions of the Support Vector Machine, Decision Tree, naive Bayes, and KNN classifiers. The outcomes of parameter adjustment greatly increased the accuracy of the four prediction models. Naïve Bayes model’s prediction accuracy is the lowest when compared to other prediction methods, as it assumes a strong independent relationship between features.},
year = {2024} 1501 1501 year = {2024}
} 1502 1502 }
1503 1503
@article{HAZEM, 1504 1504 @article{HAZEM,
author = {Hazem A. Alrakhawi and Nurullizam Jamiat and Samy S. Abu-Naser}, 1505 1505 author = {Hazem A. Alrakhawi and Nurullizam Jamiat and Samy S. Abu-Naser},
title = {Intelligent Tutoring Systems in education: A systematic review of usage, tools, effects and evaluation}, 1506 1506 title = {Intelligent Tutoring Systems in education: A systematic review of usage, tools, effects and evaluation},
journal = {Journal of Theoretical and Applied Information Technology}, 1507 1507 journal = {Journal of Theoretical and Applied Information Technology},
number = {4},
year = {2023} 1514 1514 year = {2023}
} 1515 1515 }
1516 1516
@Article{Liu2023, 1517 1517 @Article{Liu2023,
author={Liu, Mengchi 1518 1518 author={Liu, Mengchi
and Yu, Dongmei}, 1519 1519 and Yu, Dongmei},
title={Towards intelligent E-learning systems}, 1520 1520 title={Towards intelligent E-learning systems},
journal={Education and Information Technologies}, 1521 1521 journal={Education and Information Technologies},
year={2023}, 1522 1522 year={2023},
month={Jul}, 1523 1523 month={Jul},
day={01}, 1524 1524 day={01},
volume={28}, 1525 1525 volume={28},
number={7}, 1526 1526 number={7},
pages={7845-7876}, 1527 1527 pages={7845-7876},
abstract={The prevalence of e-learning systems has made educational resources more accessible, interactive and effective to learners without the geographic and temporal boundaries. However, as the number of users increases and the volume of data grows, current e-learning systems face some technical and pedagogical challenges. This paper provides a comprehensive review on the efforts of applying new information and communication technologies to improve e-learning services. We first systematically investigate current e-learning systems in terms of their classification, architecture, functions, challenges, and current trends. We then present a general architecture for big data based e-learning systems to meet the ever-growing demand for e-learning. We also describe how to use data generated in big data based e-learning systems to support more flexible and customized course delivery and personalized learning.}, 1528 1528 abstract={The prevalence of e-learning systems has made educational resources more accessible, interactive and effective to learners without the geographic and temporal boundaries. However, as the number of users increases and the volume of data grows, current e-learning systems face some technical and pedagogical challenges. This paper provides a comprehensive review on the efforts of applying new information and communication technologies to improve e-learning services. We first systematically investigate current e-learning systems in terms of their classification, architecture, functions, challenges, and current trends. We then present a general architecture for big data based e-learning systems to meet the ever-growing demand for e-learning. We also describe how to use data generated in big data based e-learning systems to support more flexible and customized course delivery and personalized learning.},
issn={1573-7608}, 1529 1529 issn={1573-7608},
doi={10.1007/s10639-022-11479-6}, 1530 1530 doi={10.1007/s10639-022-11479-6},
url={https://doi.org/10.1007/s10639-022-11479-6} 1531 1531 url={https://doi.org/10.1007/s10639-022-11479-6}
} 1532 1532 }
1533 1533
@InProceedings{10.1007/978-3-031-63646-2_13, 1534 1534 @InProceedings{10.1007/978-3-031-63646-2_13,
author="Soto-Forero, Daniel 1535 1535 author="Soto-Forero, Daniel
and Ackermann, Simha 1536 1536 and Ackermann, Simha
and Betbeder, Marie-Laure 1537 1537 and Betbeder, Marie-Laure
and Henriet, Julien", 1538 1538 and Henriet, Julien",
editor="Recio-Garcia, Juan A. 1539 1539 editor="Recio-Garcia, Juan A.
and Orozco-del-Castillo, Mauricio G. 1540 1540 and Orozco-del-Castillo, Mauricio G.
and Bridge, Derek", 1541 1541 and Bridge, Derek",
title="The Intelligent Tutoring System AI-VT with Case-Based Reasoning and Real Time Recommender Models", 1542 1542 title="The Intelligent Tutoring System AI-VT with Case-Based Reasoning and Real Time Recommender Models",
booktitle="Case-Based Reasoning Research and Development", 1543 1543 booktitle="Case-Based Reasoning Research and Development",
year="2024", 1544 1544 year="2024",
publisher="Springer Nature Switzerland", 1545 1545 publisher="Springer Nature Switzerland",
address="Cham", 1546 1546 address="Cham",
pages="191--205", 1547 1547 pages="191--205",
abstract="This paper presents a recommendation model coupled on an existing CBR system model through a new modular architecture designed to integrate multiple services in a learning system called AI-VT (Artificial Intelligence Training System). The recommendation model provides a semi-automatic review of the CBR, two variants of the recommendation model have been implemented: deterministic and stochastic. The model has been tested with 1000 simulated learners, and compared with an original CBR system and BKT (Bayesian Knowledge Tracing) recommender system. The results show that the proposed model identifies learners' weaknesses correctly and revises the content of the ITS (Intelligent Tutoring System) better than the original ITS with CBR. Compared to BKT, the results at each level of complexity are variable, but overall the proposed stochastic model obtains better results.", 1548 1548 abstract="This paper presents a recommendation model coupled on an existing CBR system model through a new modular architecture designed to integrate multiple services in a learning system called AI-VT (Artificial Intelligence Training System). The recommendation model provides a semi-automatic review of the CBR, two variants of the recommendation model have been implemented: deterministic and stochastic. The model has been tested with 1000 simulated learners, and compared with an original CBR system and BKT (Bayesian Knowledge Tracing) recommender system. The results show that the proposed model identifies learners' weaknesses correctly and revises the content of the ITS (Intelligent Tutoring System) better than the original ITS with CBR. Compared to BKT, the results at each level of complexity are variable, but overall the proposed stochastic model obtains better results.",
isbn="978-3-031-63646-2" 1549 1549 isbn="978-3-031-63646-2"
} 1550 1550 }
1551 1551
@article{doi:10.1137/23M1592420, 1552 1552 @article{doi:10.1137/23M1592420,
author = {Minsker, Stanislav and Strawn, Nate}, 1553 1553 author = {Minsker, Stanislav and Strawn, Nate},
title = {The Geometric Median and Applications to Robust Mean Estimation}, 1554 1554 title = {The Geometric Median and Applications to Robust Mean Estimation},
journal = {SIAM Journal on Mathematics of Data Science}, 1555 1555 journal = {SIAM Journal on Mathematics of Data Science},
volume = {6}, 1556 1556 volume = {6},
number = {2}, 1557 1557 number = {2},
pages = {504-533}, 1558 1558 pages = {504-533},
year = {2024}, 1559 1559 year = {2024},
doi = {10.1137/23M1592420}, 1560 1560 doi = {10.1137/23M1592420},
URL = {https://doi.org/10.1137/23M1592420},
eprint = {https://doi.org/10.1137/23M1592420}, 1562 1562 eprint = {https://doi.org/10.1137/23M1592420},
abstract = {This paper is devoted to the statistical and numerical properties of the geometric median and its applications to the problem of robust mean estimation via the median of means principle. Our main theoretical results include (a) an upper bound for the distance between the mean and the median for general absolutely continuous distributions in \(\mathbb R^d\), and examples of specific classes of distributions for which these bounds do not depend on the ambient dimension \(d\); (b) exponential deviation inequalities for the distance between the sample and the population versions of the geometric median, which again depend only on the trace-type quantities and not on the ambient dimension. As a corollary, we deduce improved bounds for the (geometric) median of means estimator that hold for large classes of heavy-tailed distributions. Finally, we address the error of numerical approximation, which is an important practical aspect of any statistical estimation procedure. We demonstrate that the objective function minimized by the geometric median satisfies a “local quadratic growth” condition that allows one to translate suboptimality bounds for the objective function to the corresponding bounds for the numerical approximation to the median itself and propose a simple stopping rule applicable to any optimization method which yields explicit error guarantees. We conclude with the numerical experiments, including the application to estimation of mean values of log-returns for S\&P 500 data.}
} 1564 1564 }
1565 1565
@article{lei2024analysis, 1566 1566 @article{lei2024analysis,
title={Analysis of Simpson’s Paradox and Its Applications}, 1567 1567 title={Analysis of Simpson’s Paradox and Its Applications},
author={Lei, Zhihao}, 1568 1568 author={Lei, Zhihao},
journal={Highlights in Science, Engineering and Technology}, 1569 1569 journal={Highlights in Science, Engineering and Technology},
volume={88}, 1570 1570 volume={88},
pages={357--362}, 1571 1571 pages={357--362},
year={2024} 1572 1572 year={2024}
} 1573 1573 }
1574 1574
@InProceedings{pmlr-v108-seznec20a, 1575 1575 @InProceedings{pmlr-v108-seznec20a,
title = {A single algorithm for both restless and rested rotting bandits}, 1576 1576 title = {A single algorithm for both restless and rested rotting bandits},
author = {Seznec, Julien and Menard, Pierre and Lazaric, Alessandro and Valko, Michal}, 1577 1577 author = {Seznec, Julien and Menard, Pierre and Lazaric, Alessandro and Valko, Michal},
booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics}, 1578 1578 booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
pages = {3784--3794}, 1579 1579 pages = {3784--3794},
year = {2020}, 1580 1580 year = {2020},
editor = {Chiappa, Silvia and Calandra, Roberto}, 1581 1581 editor = {Chiappa, Silvia and Calandra, Roberto},
volume = {108}, 1582 1582 volume = {108},
series = {Proceedings of Machine Learning Research}, 1583 1583 series = {Proceedings of Machine Learning Research},
month = {26--28 Aug}, 1584 1584 month = {26--28 Aug},
publisher = {PMLR}, 1585 1585 publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v108/seznec20a/seznec20a.pdf}, 1586 1586 pdf = {http://proceedings.mlr.press/v108/seznec20a/seznec20a.pdf},
url = {https://proceedings.mlr.press/v108/seznec20a.html}, 1587 1587 url = {https://proceedings.mlr.press/v108/seznec20a.html},
abstract = {In many application domains (e.g., recommender systems, intelligent tutoring systems), the rewards associated to the available actions tend to decrease over time. This decay is either caused by the actions executed in the past (e.g., a user may get bored when songs of the same genre are recommended over and over) or by an external factor (e.g., content becomes outdated). These two situations can be modeled as specific instances of the rested and restless bandit settings, where arms are rotting (i.e., their value decrease over time). These problems were thought to be significantly different, since Levine et al. (2017) showed that state-of-the-art algorithms for restless bandit perform poorly in the rested rotting setting. In this paper, we introduce a novel algorithm, Rotting Adaptive Window UCB (RAW-UCB), that achieves near-optimal regret in both rotting rested and restless bandit, without any prior knowledge of the setting (rested or restless) and the type of non-stationarity (e.g., piece-wise constant, bounded variation). This is in striking contrast with previous negative results showing that no algorithm can achieve similar results as soon as rewards are allowed to increase. We confirm our theoretical findings on a number of synthetic and dataset-based experiments.} 1588 1588 abstract = {In many application domains (e.g., recommender systems, intelligent tutoring systems), the rewards associated to the available actions tend to decrease over time. This decay is either caused by the actions executed in the past (e.g., a user may get bored when songs of the same genre are recommended over and over) or by an external factor (e.g., content becomes outdated). These two situations can be modeled as specific instances of the rested and restless bandit settings, where arms are rotting (i.e., their value decrease over time). These problems were thought to be significantly different, since Levine et al. 
(2017) showed that state-of-the-art algorithms for restless bandit perform poorly in the rested rotting setting. In this paper, we introduce a novel algorithm, Rotting Adaptive Window UCB (RAW-UCB), that achieves near-optimal regret in both rotting rested and restless bandit, without any prior knowledge of the setting (rested or restless) and the type of non-stationarity (e.g., piece-wise constant, bounded variation). This is in striking contrast with previous negative results showing that no algorithm can achieve similar results as soon as rewards are allowed to increase. We confirm our theoretical findings on a number of synthetic and dataset-based experiments.}
} 1589 1589 }
1590 1590
@article{doi:10.3233/AIC-1994-7104, 1591 1591 @article{doi:10.3233/AIC-1994-7104,
author = {Agnar Aamodt and Enric Plaza}, 1592 1592 author = {Agnar Aamodt and Enric Plaza},
title = {Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches}, 1593 1593 title = {Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches},
journal = {AI Communications}, 1594 1594 journal = {AI Communications},
volume = {7}, 1595 1595 volume = {7},
number = {1}, 1596 1596 number = {1},
pages = {39-59}, 1597 1597 pages = {39-59},
year = {1994}, 1598 1598 year = {1994},
doi = {10.3233/AIC-1994-7104}, 1599 1599 doi = {10.3233/AIC-1994-7104},
URL = {https://journals.sagepub.com/doi/abs/10.3233/AIC-1994-7104}, 1600 1600 URL = {https://journals.sagepub.com/doi/abs/10.3233/AIC-1994-7104},
eprint = {https://journals.sagepub.com/doi/pdf/10.3233/AIC-1994-7104}, 1603 1603 eprint = {https://journals.sagepub.com/doi/pdf/10.3233/AIC-1994-7104},
abstract = { Case-based reasoning is a recent approach to problem solving and learning that has got a lot of attention over the last few years. Originating in the US, the basic idea and underlying theories have spread to other continents, and we are now within a period of highly active research in case-based reasoning in Europe as well. This paper gives an overview of the foundational issues related to case-based reasoning, describes some of the leading methodological approaches within the field, and exemplifies the current state through pointers to some systems. Initially, a general framework is defined, to which the subsequent descriptions and discussions will refer. The framework is influenced by recent methodologies for knowledge level descriptions of intelligent systems. The methods for case retrieval, reuse, solution testing, and learning are summarized, and their actual realization is discussed in the light of a few example systems that represent different CBR approaches. We also discuss the role of case-based methods as one type of reasoning and learning method within an integrated system architecture. } 1606 1606 abstract = { Case-based reasoning is a recent approach to problem solving and learning that has got a lot of attention over the last few years. Originating in the US, the basic idea and underlying theories have spread to other continents, and we are now within a period of highly active research in case-based reasoning in Europe as well. This paper gives an overview of the foundational issues related to case-based reasoning, describes some of the leading methodological approaches within the field, and exemplifies the current state through pointers to some systems. Initially, a general framework is defined, to which the subsequent descriptions and discussions will refer. The framework is influenced by recent methodologies for knowledge level descriptions of intelligent systems. 
The methods for case retrieval, reuse, solution testing, and learning are summarized, and their actual realization is discussed in the light of a few example systems that represent different CBR approaches. We also discuss the role of case-based methods as one type of reasoning and learning method within an integrated system architecture. }
} 1607 1607 }
1608 1608
@Book{schank+abelson77, 1609 1609 @Book{schank+abelson77,
author = {Roger C. Schank and Robert P. Abelson}, 1610 1610 author = {Roger C. Schank and Robert P. Abelson},
title = {Scripts, Plans, Goals and Understanding: an Inquiry into Human Knowledge Structures}, 1611 1611 title = {Scripts, Plans, Goals and Understanding: an Inquiry into Human Knowledge Structures},
publisher = {L. Erlbaum}, 1612 1612 publisher = {L. Erlbaum},
year = {1977}, 1613 1613 year = {1977},
address = {Hillsdale, NJ}, 1614 1614 address = {Hillsdale, NJ},
keywords = {PAM, SAM, TALE-SPIN, causality, conceptual dependency, goals, plans, scripts, semantic primitive, text understanding} 1615 1615 keywords = {PAM, SAM, TALE-SPIN, causality, conceptual dependency, goals, plans, scripts, semantic primitive, text understanding}
} 1616 1616 }
1617 1617
@article{KOLODNER1983281, 1618 1618 @article{KOLODNER1983281,
title = {Reconstructive memory: A computer model}, 1619 1619 title = {Reconstructive memory: A computer model},
journal = {Cognitive Science}, 1620 1620 journal = {Cognitive Science},
volume = {7}, 1621 1621 volume = {7},
number = {4}, 1622 1622 number = {4},
pages = {281-328}, 1623 1623 pages = {281-328},
year = {1983}, 1624 1624 year = {1983},
issn = {0364-0213}, 1625 1625 issn = {0364-0213},
doi = {10.1016/S0364-0213(83)80002-0}, 1626 1626 doi = {10.1016/S0364-0213(83)80002-0},
url = {https://www.sciencedirect.com/science/article/pii/S0364021383800020}, 1627 1627 url = {https://www.sciencedirect.com/science/article/pii/S0364021383800020},
author = {Janet L. Kolodner}, 1628 1628 author = {Janet L. Kolodner},
abstract = {This study presents a process model of very long-term episodic memory. The process presented is a reconstructive process. The process involves application of three kinds of reconstructive strategies—component-to-context instantiation strategies, component-instantiation strategies, and context-to-context instantiation strategies. The first is used to direct search to appropriate conceptual categories in memory. The other two are used to direct search within the chosen conceptual category. A fourth type of strategy, called executive search strategies, guide search for concepts related to the one targeted for retrieval. A conceptual memory organization implied by human reconstructive memory is presented along with examples which motivate it. A basic retrieval algorithm is presented for traversing that stucture. Retrieval strategies arise from failures in that algorithm. The memory organization and retrieval processes are implemented in a computer program called CYRUS which stores events in the lives of former Secretaries of State Cyrus Vance and Edmund Muskie and answers questions posed in English concerning that information. Examples which motivate the process model are drawn from protocols of human memory search. Examples of CYRUS'S behavior demonstrate the implemented process model. Conclusions are drawn concerning retrieval failures and the relationship of episodic and semantic memory.} 1629 1629 abstract = {This study presents a process model of very long-term episodic memory. The process presented is a reconstructive process. The process involves application of three kinds of reconstructive strategies—component-to-context instantiation strategies, component-instantiation strategies, and context-to-context instantiation strategies. The first is used to direct search to appropriate conceptual categories in memory. The other two are used to direct search within the chosen conceptual category. 
A fourth type of strategy, called executive search strategies, guide search for concepts related to the one targeted for retrieval. A conceptual memory organization implied by human reconstructive memory is presented along with examples which motivate it. A basic retrieval algorithm is presented for traversing that stucture. Retrieval strategies arise from failures in that algorithm. The memory organization and retrieval processes are implemented in a computer program called CYRUS which stores events in the lives of former Secretaries of State Cyrus Vance and Edmund Muskie and answers questions posed in English concerning that information. Examples which motivate the process model are drawn from protocols of human memory search. Examples of CYRUS'S behavior demonstrate the implemented process model. Conclusions are drawn concerning retrieval failures and the relationship of episodic and semantic memory.}
} 1630 1630 }
1631 1631
@Book{Riesbeck1989, 1632 1632 @Book{Riesbeck1989,
author = {Christian K. Riesbeck and Roger C. Schank}, 1633 1633 author = {Christian K. Riesbeck and Roger C. Schank},
year = {1989}, 1634 1634 year = {1989},
title = {Inside Case-Based Reasoning}, 1635 1635 title = {Inside Case-Based Reasoning},
publisher = {Psychology Press}, 1636 1636 publisher = {Psychology Press},
url = {https://doi.org/10.4324/9780203781821} 1637 1637 url = {https://doi.org/10.4324/9780203781821}
} 1638 1638 }
1639 1639
@article{ALABDULRAHMAN2021114061, 1640 1640 @article{ALABDULRAHMAN2021114061,
title = {Catering for unique tastes: Targeting grey-sheep users recommender systems through one-class machine learning}, 1641 1641 title = {Catering for unique tastes: Targeting grey-sheep users recommender systems through one-class machine learning},
journal = {Expert Systems with Applications}, 1642 1642 journal = {Expert Systems with Applications},
volume = {166}, 1643 1643 volume = {166},
pages = {114061}, 1644 1644 pages = {114061},
year = {2021}, 1645 1645 year = {2021},
issn = {0957-4174}, 1646 1646 issn = {0957-4174},
doi = {10.1016/j.eswa.2020.114061}, 1647 1647 doi = {10.1016/j.eswa.2020.114061},
url = {https://www.sciencedirect.com/science/article/pii/S0957417420308241}, 1648 1648 url = {https://www.sciencedirect.com/science/article/pii/S0957417420308241},
author = {Rabaa Alabdulrahman and Herna Viktor}, 1649 1649 author = {Rabaa Alabdulrahman and Herna Viktor},
keywords = {Recommender systems, Model-based systems, Machine learning, Grey-sheep, One-class classification}, 1650 1650 keywords = {Recommender systems, Model-based systems, Machine learning, Grey-sheep, One-class classification},
abstract = {In recommendation systems, the grey-sheep problem refers to users with unique preferences and tastes that make it difficult to develop accurate profiles. That is, the similarity search approach typically followed during the recommendation process fails to yield good results. Most research does not focus on such users and thus fails to cater to more exotic tastes and emerging trends, leading to a subsequent loss in revenue and marketing opportunities. One suggested solution is to use one-class classification to generate a prediction list for these users, where decision boundaries are learned that distinguish between normal and grey-sheep users. In this paper, we present the grey-sheep one-class recommendation (GSOR) framework designed to create accurate prediction models while taking both regular and grey-sheep users into account. In addition, we introduce a novel grey-sheep movie recommendation benchmark to be used by current and future researchers. When evaluating our GSOR framework against this benchmark, our results indicate the value of combining cluster analysis, outlier detection, and one-class learning to generate relevant and timely recommendation lists from data sets that contain grey-sheep users. Specifically, by employing one-class decision tree algorithms, our GSOR framework was able to outperform traditional collaborative filtering-based recommendation systems in both accuracy and model construction time. Furthermore, we report that having grey-sheep users in the system often had a positive impact on the learning and recommendation processes.} 1651 1651 abstract = {In recommendation systems, the grey-sheep problem refers to users with unique preferences and tastes that make it difficult to develop accurate profiles. That is, the similarity search approach typically followed during the recommendation process fails to yield good results. 
Most research does not focus on such users and thus fails to cater to more exotic tastes and emerging trends, leading to a subsequent loss in revenue and marketing opportunities. One suggested solution is to use one-class classification to generate a prediction list for these users, where decision boundaries are learned that distinguish between normal and grey-sheep users. In this paper, we present the grey-sheep one-class recommendation (GSOR) framework designed to create accurate prediction models while taking both regular and grey-sheep users into account. In addition, we introduce a novel grey-sheep movie recommendation benchmark to be used by current and future researchers. When evaluating our GSOR framework against this benchmark, our results indicate the value of combining cluster analysis, outlier detection, and one-class learning to generate relevant and timely recommendation lists from data sets that contain grey-sheep users. Specifically, by employing one-class decision tree algorithms, our GSOR framework was able to outperform traditional collaborative filtering-based recommendation systems in both accuracy and model construction time. Furthermore, we report that having grey-sheep users in the system often had a positive impact on the learning and recommendation processes.}
} 1652 1652 }
1653 1653
@article{HU2025127130, 1654 1654 @article{HU2025127130,
title = {A social importance and category enhanced cold-start user recommendation system}, 1655 1655 title = {A social importance and category enhanced cold-start user recommendation system},
journal = {Expert Systems with Applications}, 1656 1656 journal = {Expert Systems with Applications},
volume = {277}, 1657 1657 volume = {277},
pages = {127130}, 1658 1658 pages = {127130},
year = {2025}, 1659 1659 year = {2025},
issn = {0957-4174}, 1660 1660 issn = {0957-4174},
doi = {10.1016/j.eswa.2025.127130}, 1661 1661 doi = {10.1016/j.eswa.2025.127130},
url = {https://www.sciencedirect.com/science/article/pii/S0957417425007523}, 1662 1662 url = {https://www.sciencedirect.com/science/article/pii/S0957417425007523},
author = {Bin Hu and Yinghong Ma and Zhiyuan Liu and Hong Wang}, 1663 1663 author = {Bin Hu and Yinghong Ma and Zhiyuan Liu and Hong Wang},
keywords = {Social recommendation, Graph neural network, Cold-start users, Social importance, Category information}, 1664 1664 keywords = {Social recommendation, Graph neural network, Cold-start users, Social importance, Category information},
abstract = {Social recommendation, which utilizes social relations to enhance recommender systems, has gained increasing attention with the rapid development of online social platforms. Although numerous studies have underscored the efficacy of integrating personal social information to bolster the performance of such systems, social recommendations still face several problems. Firstly, the cold-start problem for items persists in recommendation tasks leveraging social information. Secondly, the importance of users within social networks is often disregarded, leading to biases in recommendation tasks utilizing social information. Thirdly, the lack of utilization of item category information makes learning representations of items and users insufficient. Hence, this paper proposes a novel social recommendation model, Social Importance and Category Enhanced Cold-Start User Recommendation System (SICERec). At first, potential preference information for cold-start users is incorporated into similar user modules, extracting user preference information from historical interaction data between users and items. After that, the significance of users within social networks is considered by integrating their centrality attributes, thereby enriching the semantic representation of users. Finally, category information of user historical interaction items is incorporated into the modeling process to enrich the semantics of items. Extensive experimental results demonstrate the significant advantages of our SICERec method. Our model exhibits a minimum improvement of 15.1% in RMSE and at least 26.2% in MAE compared to state-of-the-art models when evaluated on two real datasets. Additionally, ablation experiments are conducted to validate each module’s effectiveness and provide further insights into how users’ social attributes and preferences influence their choices. 
We release our code at https://github.com/BinHu129/SICERec.} 1665 1665 abstract = {Social recommendation, which utilizes social relations to enhance recommender systems, has gained increasing attention with the rapid development of online social platforms. Although numerous studies have underscored the efficacy of integrating personal social information to bolster the performance of such systems, social recommendations still face several problems. Firstly, the cold-start problem for items persists in recommendation tasks leveraging social information. Secondly, the importance of users within social networks is often disregarded, leading to biases in recommendation tasks utilizing social information. Thirdly, the lack of utilization of item category information makes learning representations of items and users insufficient. Hence, this paper proposes a novel social recommendation model, Social Importance and Category Enhanced Cold-Start User Recommendation System (SICERec). At first, potential preference information for cold-start users is incorporated into similar user modules, extracting user preference information from historical interaction data between users and items. After that, the significance of users within social networks is considered by integrating their centrality attributes, thereby enriching the semantic representation of users. Finally, category information of user historical interaction items is incorporated into the modeling process to enrich the semantics of items. Extensive experimental results demonstrate the significant advantages of our SICERec method. Our model exhibits a minimum improvement of 15.1% in RMSE and at least 26.2% in MAE compared to state-of-the-art models when evaluated on two real datasets. Additionally, ablation experiments are conducted to validate each module’s effectiveness and provide further insights into how users’ social attributes and preferences influence their choices. 
We release our code at https://github.com/BinHu129/SICERec.}
} 1666 1666 }
1667 1667
@inproceedings{wolf2024keep, 1668 1668 @inproceedings{wolf2024keep,
title={Keep the faith: Faithful explanations in convolutional neural networks for case-based reasoning}, 1669 1669 title={Keep the faith: Faithful explanations in convolutional neural networks for case-based reasoning},
author={Wolf, Tom Nuno and Bongratz, Fabian and Rickmann, Anne-Marie and P{\"o}lsterl, Sebastian and Wachinger, Christian}, 1670 1670 author={Wolf, Tom Nuno and Bongratz, Fabian and Rickmann, Anne-Marie and P{\"o}lsterl, Sebastian and Wachinger, Christian},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence}, 1671 1671 booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={38}, 1672 1672 volume={38},
pages={5921--5929}, 1673 1673 pages={5921--5929},
year={2024} 1674 1674 year={2024}
} 1675 1675 }
1676 1676
@article{PAREJASLLANOVARCED2024111469, 1677 1677 @article{PAREJASLLANOVARCED2024111469,
title = {Case-based selection of explanation methods for neural network image classifiers}, 1678 1678 title = {Case-based selection of explanation methods for neural network image classifiers},
journal = {Knowledge-Based Systems}, 1679 1679 journal = {Knowledge-Based Systems},
volume = {288}, 1680 1680 volume = {288},
pages = {111469}, 1681 1681 pages = {111469},
year = {2024}, 1682 1682 year = {2024},
issn = {0950-7051}, 1683 1683 issn = {0950-7051},
doi = {10.1016/j.knosys.2024.111469}, 1684 1684 doi = {10.1016/j.knosys.2024.111469},
url = {https://www.sciencedirect.com/science/article/pii/S0950705124001047}, 1685 1685 url = {https://www.sciencedirect.com/science/article/pii/S0950705124001047},
author = {Humberto Parejas-Llanovarced and Marta Caro-Martínez and Mauricio G. Orozco-del-Castillo and Juan A. Recio-García}, 1686 1686 author = {Humberto Parejas-Llanovarced and Marta Caro-Martínez and Mauricio G. Orozco-del-Castillo and Juan A. Recio-García},
This is BibTeX, Version 0.99d (TeX Live 2023) 1 1 This is BibTeX, Version 0.99d (TeX Live 2023)
Capacity: max_strings=200000, hash_size=200000, hash_prime=170003 2 2 Capacity: max_strings=200000, hash_size=200000, hash_prime=170003
The top-level auxiliary file: main.aux 3 3 The top-level auxiliary file: main.aux
A level-1 auxiliary file: ./chapters/contexte2.aux 4 4 A level-1 auxiliary file: ./chapters/contexte2.aux
A level-1 auxiliary file: ./chapters/EIAH.aux 5 5 A level-1 auxiliary file: ./chapters/EIAH.aux
A level-1 auxiliary file: ./chapters/CBR.aux 6 6 A level-1 auxiliary file: ./chapters/CBR.aux
A level-1 auxiliary file: ./chapters/Architecture.aux 7 7 A level-1 auxiliary file: ./chapters/Architecture.aux
A level-1 auxiliary file: ./chapters/ESCBR.aux 8 8 A level-1 auxiliary file: ./chapters/ESCBR.aux
A level-1 auxiliary file: ./chapters/TS.aux 9 9 A level-1 auxiliary file: ./chapters/TS.aux
A level-1 auxiliary file: ./chapters/Conclusions.aux 10 10 A level-1 auxiliary file: ./chapters/Conclusions.aux
A level-1 auxiliary file: ./chapters/Publications.aux 11 11 A level-1 auxiliary file: ./chapters/Publications.aux
The style file: apalike.bst 12 12 The style file: apalike.bst
Database file #1: main.bib 13 13 Database file #1: main.bib
Warning--entry type for "Daubias2011" isn't style-file defined 14 14 Warning--entry type for "Daubias2011" isn't style-file defined
--line 693 of file main.bib 15 15 --line 693 of file main.bib
Warning--to sort, need author or key in UCI 16 16 Warning--to sort, need author or key in UCI
You've used 84 entries, 17 17 Warning--to sort, need author or key in Data
18 You've used 85 entries,
1935 wiz_defined-function locations, 18 19 1935 wiz_defined-function locations,
1000 strings with 21030 characters, 19 20 1005 strings with 21157 characters,
and the built_in function-call counts, 37959 in all, are: 20 21 and the built_in function-call counts, 38168 in all, are:
= -- 3636 21 22 = -- 3659
> -- 1794 22 23 > -- 1794
< -- 56 23 24 < -- 56
+ -- 656 24 25 + -- 656
- -- 602 25 26 - -- 602
* -- 3257 26 27 * -- 3272
:= -- 6493 27 28 := -- 6527
add.period$ -- 270 28 29 add.period$ -- 274
call.type$ -- 84 29 30 call.type$ -- 85
change.case$ -- 696 30 31 change.case$ -- 701
chr.to.int$ -- 82 31 32 chr.to.int$ -- 83
cite$ -- 86 32 33 cite$ -- 89
duplicate$ -- 1432 33 34 duplicate$ -- 1442
empty$ -- 2563 34 35 empty$ -- 2581
format.name$ -- 731 35 36 format.name$ -- 731
if$ -- 7559 36 37 if$ -- 7600
int.to.chr$ -- 3 37 38 int.to.chr$ -- 3
int.to.str$ -- 0 38 39 int.to.str$ -- 0
missing$ -- 88 39 40 missing$ -- 88
newline$ -- 424 40 41 newline$ -- 430
num.names$ -- 279 41 42 num.names$ -- 279
pop$ -- 629 42 43 pop$ -- 632
preamble$ -- 1 43 44 preamble$ -- 1
purify$ -- 701 44 45 purify$ -- 706
quote$ -- 0 45 46 quote$ -- 0
skip$ -- 1094 46 47 skip$ -- 1101
stack$ -- 0 47 48 stack$ -- 0
substring$ -- 2550 48 49 substring$ -- 2563
swap$ -- 270 49 50 swap$ -- 270
text.length$ -- 24 50 51 text.length$ -- 24
text.prefix$ -- 0 51 52 text.prefix$ -- 0
top$ -- 0 52 53 top$ -- 0
type$ -- 492 53 54 type$ -- 498
warning$ -- 1 54 55 warning$ -- 2
while$ -- 284 55 56 while$ -- 284
width$ -- 0 56 57 width$ -- 0
write$ -- 1122 57 58 write$ -- 1135
(There were 2 warnings) 58 59 (There were 3 warnings)
59 60
This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) (preloaded format=pdflatex 2023.5.31) 11 JUL 2025 22:56 1 1 This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) (preloaded format=pdflatex 2023.5.31) 11 JUL 2025 23:11
entering extended mode 2 2 entering extended mode
restricted \write18 enabled. 3 3 restricted \write18 enabled.
%&-line parsing enabled. 4 4 %&-line parsing enabled.
**main.tex 5 5 **main.tex
(./main.tex 6 6 (./main.tex
LaTeX2e <2022-11-01> patch level 1 7 7 LaTeX2e <2022-11-01> patch level 1
L3 programming layer <2023-05-22> (./spimufcphdthesis.cls 8 8 L3 programming layer <2023-05-22> (./spimufcphdthesis.cls
Document Class: spimufcphdthesis 2022/02/10 9 9 Document Class: spimufcphdthesis 2022/02/10
10 10
(/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/upmethodology-docum 11 11 (/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/upmethodology-docum
ent.cls 12 12 ent.cls
Document Class: upmethodology-document 2022/10/04 13 13 Document Class: upmethodology-document 2022/10/04
(./upmethodology-p-common.sty 14 14 (./upmethodology-p-common.sty
Package: upmethodology-p-common 2015/04/24 15 15 Package: upmethodology-p-common 2015/04/24
16 16
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/ifthen.sty 17 17 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/ifthen.sty
Package: ifthen 2022/04/13 v1.1d Standard LaTeX ifthen package (DPC) 18 18 Package: ifthen 2022/04/13 v1.1d Standard LaTeX ifthen package (DPC)
) 19 19 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/xspace.sty 20 20 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/xspace.sty
Package: xspace 2014/10/28 v1.13 Space after command names (DPC,MH) 21 21 Package: xspace 2014/10/28 v1.13 Space after command names (DPC,MH)
) 22 22 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/xcolor/xcolor.sty 23 23 (/usr/local/texlive/2023/texmf-dist/tex/latex/xcolor/xcolor.sty
Package: xcolor 2022/06/12 v2.14 LaTeX color extensions (UK) 24 24 Package: xcolor 2022/06/12 v2.14 LaTeX color extensions (UK)
25 25
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-cfg/color.cfg 26 26 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-cfg/color.cfg
File: color.cfg 2016/01/02 v1.6 sample color configuration 27 27 File: color.cfg 2016/01/02 v1.6 sample color configuration
) 28 28 )
Package xcolor Info: Driver file: pdftex.def on input line 227. 29 29 Package xcolor Info: Driver file: pdftex.def on input line 227.
30 30
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-def/pdftex.def 31 31 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-def/pdftex.def
File: pdftex.def 2022/09/22 v1.2b Graphics/color driver for pdftex 32 32 File: pdftex.def 2022/09/22 v1.2b Graphics/color driver for pdftex
) 33 33 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/mathcolor.ltx) 34 34 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/mathcolor.ltx)
Package xcolor Info: Model `cmy' substituted by `cmy0' on input line 1353. 35 35 Package xcolor Info: Model `cmy' substituted by `cmy0' on input line 1353.
Package xcolor Info: Model `hsb' substituted by `rgb' on input line 1357. 36 36 Package xcolor Info: Model `hsb' substituted by `rgb' on input line 1357.
Package xcolor Info: Model `RGB' extended on input line 1369. 37 37 Package xcolor Info: Model `RGB' extended on input line 1369.
Package xcolor Info: Model `HTML' substituted by `rgb' on input line 1371. 38 38 Package xcolor Info: Model `HTML' substituted by `rgb' on input line 1371.
Package xcolor Info: Model `Hsb' substituted by `hsb' on input line 1372. 39 39 Package xcolor Info: Model `Hsb' substituted by `hsb' on input line 1372.
Package xcolor Info: Model `tHsb' substituted by `hsb' on input line 1373. 40 40 Package xcolor Info: Model `tHsb' substituted by `hsb' on input line 1373.
Package xcolor Info: Model `HSB' substituted by `hsb' on input line 1374. 41 41 Package xcolor Info: Model `HSB' substituted by `hsb' on input line 1374.
Package xcolor Info: Model `Gray' substituted by `gray' on input line 1375. 42 42 Package xcolor Info: Model `Gray' substituted by `gray' on input line 1375.
Package xcolor Info: Model `wave' substituted by `hsb' on input line 1376. 43 43 Package xcolor Info: Model `wave' substituted by `hsb' on input line 1376.
) 44 44 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/ifpdf.sty 45 45 (/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/ifpdf.sty
Package: ifpdf 2019/10/25 v3.4 ifpdf legacy package. Use iftex instead. 46 46 Package: ifpdf 2019/10/25 v3.4 ifpdf legacy package. Use iftex instead.
47 47
(/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/iftex.sty 48 48 (/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/iftex.sty
Package: iftex 2022/02/03 v1.0f TeX engine tests 49 49 Package: iftex 2022/02/03 v1.0f TeX engine tests
)) 50 50 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/UPMVERSION.def)) 51 51 (/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/UPMVERSION.def))
*********** UPMETHODOLOGY BOOK CLASS (WITH PART AND CHAPTER) 52 52 *********** UPMETHODOLOGY BOOK CLASS (WITH PART AND CHAPTER)
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/book.cls 53 53 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/book.cls
Document Class: book 2022/07/02 v1.4n Standard LaTeX document class 54 54 Document Class: book 2022/07/02 v1.4n Standard LaTeX document class
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/bk11.clo 55 55 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/bk11.clo
File: bk11.clo 2022/07/02 v1.4n Standard LaTeX file (size option) 56 56 File: bk11.clo 2022/07/02 v1.4n Standard LaTeX file (size option)
) 57 57 )
\c@part=\count185 58 58 \c@part=\count185
\c@chapter=\count186 59 59 \c@chapter=\count186
\c@section=\count187 60 60 \c@section=\count187
\c@subsection=\count188 61 61 \c@subsection=\count188
\c@subsubsection=\count189 62 62 \c@subsubsection=\count189
\c@paragraph=\count190 63 63 \c@paragraph=\count190
\c@subparagraph=\count191 64 64 \c@subparagraph=\count191
\c@figure=\count192 65 65 \c@figure=\count192
\c@table=\count193 66 66 \c@table=\count193
\abovecaptionskip=\skip48 67 67 \abovecaptionskip=\skip48
\belowcaptionskip=\skip49 68 68 \belowcaptionskip=\skip49
\bibindent=\dimen140 69 69 \bibindent=\dimen140
) 70 70 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/a4wide/a4wide.sty 71 71 (/usr/local/texlive/2023/texmf-dist/tex/latex/a4wide/a4wide.sty
Package: a4wide 1994/08/30 72 72 Package: a4wide 1994/08/30
73 73
(/usr/local/texlive/2023/texmf-dist/tex/latex/ntgclass/a4.sty 74 74 (/usr/local/texlive/2023/texmf-dist/tex/latex/ntgclass/a4.sty
Package: a4 2023/01/10 v1.2g A4 based page layout 75 75 Package: a4 2023/01/10 v1.2g A4 based page layout
)) 76 76 ))
(./upmethodology-document.sty 77 77 (./upmethodology-document.sty
Package: upmethodology-document 2015/04/24 78 78 Package: upmethodology-document 2015/04/24
79 79
**** upmethodology-document is using French language **** 80 80 **** upmethodology-document is using French language ****
(/usr/local/texlive/2023/texmf-dist/tex/generic/babel/babel.sty 81 81 (/usr/local/texlive/2023/texmf-dist/tex/generic/babel/babel.sty
Package: babel 2023/05/11 v3.89 The Babel package 82 82 Package: babel 2023/05/11 v3.89 The Babel package
\babel@savecnt=\count194 83 83 \babel@savecnt=\count194
\U@D=\dimen141 84 84 \U@D=\dimen141
\l@unhyphenated=\language87 85 85 \l@unhyphenated=\language87
86 86
(/usr/local/texlive/2023/texmf-dist/tex/generic/babel/txtbabel.def) 87 87 (/usr/local/texlive/2023/texmf-dist/tex/generic/babel/txtbabel.def)
\bbl@readstream=\read2 88 88 \bbl@readstream=\read2
\bbl@dirlevel=\count195 89 89 \bbl@dirlevel=\count195
90 90
(/usr/local/texlive/2023/texmf-dist/tex/generic/babel-french/french.ldf 91 91 (/usr/local/texlive/2023/texmf-dist/tex/generic/babel-french/french.ldf
Language: french 2023/03/08 v3.5q French support from the babel system 92 92 Language: french 2023/03/08 v3.5q French support from the babel system
Package babel Info: Hyphen rules for 'acadian' set to \l@french 93 93 Package babel Info: Hyphen rules for 'acadian' set to \l@french
(babel) (\language29). Reported on input line 91. 94 94 (babel) (\language29). Reported on input line 91.
Package babel Info: Hyphen rules for 'canadien' set to \l@french 95 95 Package babel Info: Hyphen rules for 'canadien' set to \l@french
(babel) (\language29). Reported on input line 92. 96 96 (babel) (\language29). Reported on input line 92.
\FB@nonchar=\count196 97 97 \FB@nonchar=\count196
Package babel Info: Making : an active character on input line 395. 98 98 Package babel Info: Making : an active character on input line 395.
Package babel Info: Making ; an active character on input line 396. 99 99 Package babel Info: Making ; an active character on input line 396.
Package babel Info: Making ! an active character on input line 397. 100 100 Package babel Info: Making ! an active character on input line 397.
Package babel Info: Making ? an active character on input line 398. 101 101 Package babel Info: Making ? an active character on input line 398.
\FBguill@level=\count197 102 102 \FBguill@level=\count197
\FBold@everypar=\toks16 103 103 \FBold@everypar=\toks16
\FB@Mht=\dimen142 104 104 \FB@Mht=\dimen142
\mc@charclass=\count198 105 105 \mc@charclass=\count198
\mc@charfam=\count199 106 106 \mc@charfam=\count199
\mc@charslot=\count266 107 107 \mc@charslot=\count266
\std@mcc=\count267 108 108 \std@mcc=\count267
\dec@mcc=\count268 109 109 \dec@mcc=\count268
\FB@parskip=\dimen143 110 110 \FB@parskip=\dimen143
\listindentFB=\dimen144 111 111 \listindentFB=\dimen144
\descindentFB=\dimen145 112 112 \descindentFB=\dimen145
\labelindentFB=\dimen146 113 113 \labelindentFB=\dimen146
\labelwidthFB=\dimen147 114 114 \labelwidthFB=\dimen147
\leftmarginFB=\dimen148 115 115 \leftmarginFB=\dimen148
\parindentFFN=\dimen149 116 116 \parindentFFN=\dimen149
\FBfnindent=\dimen150 117 117 \FBfnindent=\dimen150
) 118 118 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/babel-french/frenchb.ldf 119 119 (/usr/local/texlive/2023/texmf-dist/tex/generic/babel-french/frenchb.ldf
Language: frenchb 2023/03/08 v3.5q French support from the babel system 120 120 Language: frenchb 2023/03/08 v3.5q French support from the babel system
121 121
122 122
Package babel-french Warning: Option `frenchb' for Babel is *deprecated*, 123 123 Package babel-french Warning: Option `frenchb' for Babel is *deprecated*,
(babel-french) it might be removed sooner or later. Please 124 124 (babel-french) it might be removed sooner or later. Please
(babel-french) use `french' instead; reported on input line 35. 125 125 (babel-french) use `french' instead; reported on input line 35.
126 126
(/usr/local/texlive/2023/texmf-dist/tex/generic/babel-french/french.ldf 127 127 (/usr/local/texlive/2023/texmf-dist/tex/generic/babel-french/french.ldf
Language: french 2023/03/08 v3.5q French support from the babel system 128 128 Language: french 2023/03/08 v3.5q French support from the babel system
))) 129 129 )))
(/usr/local/texlive/2023/texmf-dist/tex/generic/babel/locale/fr/babel-french.te 130 130 (/usr/local/texlive/2023/texmf-dist/tex/generic/babel/locale/fr/babel-french.te
x 131 131 x
Package babel Info: Importing font and identification data for french 132 132 Package babel Info: Importing font and identification data for french
(babel) from babel-fr.ini. Reported on input line 11. 133 133 (babel) from babel-fr.ini. Reported on input line 11.
) (/usr/local/texlive/2023/texmf-dist/tex/latex/carlisle/scalefnt.sty) 134 134 ) (/usr/local/texlive/2023/texmf-dist/tex/latex/carlisle/scalefnt.sty)
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/keyval.sty 135 135 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/keyval.sty
Package: keyval 2022/05/29 v1.15 key=value parser (DPC) 136 136 Package: keyval 2022/05/29 v1.15 key=value parser (DPC)
\KV@toks@=\toks17 137 137 \KV@toks@=\toks17
) 138 138 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/vmargin/vmargin.sty 139 139 (/usr/local/texlive/2023/texmf-dist/tex/latex/vmargin/vmargin.sty
Package: vmargin 2004/07/15 V2.5 set document margins (VK) 140 140 Package: vmargin 2004/07/15 V2.5 set document margins (VK)
141 141
Package: vmargin 2004/07/15 V2.5 set document margins (VK) 142 142 Package: vmargin 2004/07/15 V2.5 set document margins (VK)
\PaperWidth=\dimen151 143 143 \PaperWidth=\dimen151
\PaperHeight=\dimen152 144 144 \PaperHeight=\dimen152
) (./upmethodology-extension.sty 145 145 ) (./upmethodology-extension.sty
Package: upmethodology-extension 2012/09/21 146 146 Package: upmethodology-extension 2012/09/21
\upmext@tmp@putx=\skip50 147 147 \upmext@tmp@putx=\skip50
148 148
*** define extension value frontillustrationsize **** 149 149 *** define extension value frontillustrationsize ****
*** define extension value watermarksize **** 150 150 *** define extension value watermarksize ****
*** undefine extension value publisher **** 151 151 *** undefine extension value publisher ****
*** undefine extension value copyrighter **** 152 152 *** undefine extension value copyrighter ****
*** undefine extension value printedin ****) 153 153 *** undefine extension value printedin ****)
(/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/upmethodology-fmt.s 154 154 (/usr/local/texlive/2023/texmf-dist/tex/latex/upmethodology/upmethodology-fmt.s
ty 155 155 ty
Package: upmethodology-fmt 2022/10/04 156 156 Package: upmethodology-fmt 2022/10/04
**** upmethodology-fmt is using French language **** 157 157 **** upmethodology-fmt is using French language ****
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/graphicx.sty 158 158 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/graphicx.sty
Package: graphicx 2021/09/16 v1.2d Enhanced LaTeX Graphics (DPC,SPQR) 159 159 Package: graphicx 2021/09/16 v1.2d Enhanced LaTeX Graphics (DPC,SPQR)
160 160
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/graphics.sty 161 161 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/graphics.sty
Package: graphics 2022/03/10 v1.4e Standard LaTeX Graphics (DPC,SPQR) 162 162 Package: graphics 2022/03/10 v1.4e Standard LaTeX Graphics (DPC,SPQR)
163 163
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/trig.sty 164 164 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics/trig.sty
Package: trig 2021/08/11 v1.11 sin cos tan (DPC) 165 165 Package: trig 2021/08/11 v1.11 sin cos tan (DPC)
) 166 166 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-cfg/graphics.cfg 167 167 (/usr/local/texlive/2023/texmf-dist/tex/latex/graphics-cfg/graphics.cfg
File: graphics.cfg 2016/06/04 v1.11 sample graphics configuration 168 168 File: graphics.cfg 2016/06/04 v1.11 sample graphics configuration
) 169 169 )
Package graphics Info: Driver file: pdftex.def on input line 107. 170 170 Package graphics Info: Driver file: pdftex.def on input line 107.
) 171 171 )
\Gin@req@height=\dimen153 172 172 \Gin@req@height=\dimen153
\Gin@req@width=\dimen154 173 173 \Gin@req@width=\dimen154
) 174 174 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/caption/subcaption.sty 175 175 (/usr/local/texlive/2023/texmf-dist/tex/latex/caption/subcaption.sty
Package: subcaption 2023/02/19 v1.6 Sub-captions (AR) 176 176 Package: subcaption 2023/02/19 v1.6 Sub-captions (AR)
177 177
(/usr/local/texlive/2023/texmf-dist/tex/latex/caption/caption.sty 178 178 (/usr/local/texlive/2023/texmf-dist/tex/latex/caption/caption.sty
Package: caption 2023/03/12 v3.6j Customizing captions (AR) 179 179 Package: caption 2023/03/12 v3.6j Customizing captions (AR)
180 180
(/usr/local/texlive/2023/texmf-dist/tex/latex/caption/caption3.sty 181 181 (/usr/local/texlive/2023/texmf-dist/tex/latex/caption/caption3.sty
Package: caption3 2023/03/12 v2.4 caption3 kernel (AR) 182 182 Package: caption3 2023/03/12 v2.4 caption3 kernel (AR)
\caption@tempdima=\dimen155 183 183 \caption@tempdima=\dimen155
\captionmargin=\dimen156 184 184 \captionmargin=\dimen156
\caption@leftmargin=\dimen157 185 185 \caption@leftmargin=\dimen157
\caption@rightmargin=\dimen158 186 186 \caption@rightmargin=\dimen158
\caption@width=\dimen159 187 187 \caption@width=\dimen159
\caption@indent=\dimen160 188 188 \caption@indent=\dimen160
\caption@parindent=\dimen161 189 189 \caption@parindent=\dimen161
\caption@hangindent=\dimen162 190 190 \caption@hangindent=\dimen162
Package caption Info: Standard document class detected. 191 191 Package caption Info: Standard document class detected.
Package caption Info: french babel package is loaded. 192 192 Package caption Info: french babel package is loaded.
) 193 193 )
\c@caption@flags=\count269 194 194 \c@caption@flags=\count269
\c@continuedfloat=\count270 195 195 \c@continuedfloat=\count270
) 196 196 )
Package caption Info: New subtype `subfigure' on input line 239. 197 197 Package caption Info: New subtype `subfigure' on input line 239.
\c@subfigure=\count271 198 198 \c@subfigure=\count271
Package caption Info: New subtype `subtable' on input line 239. 199 199 Package caption Info: New subtype `subtable' on input line 239.
\c@subtable=\count272 200 200 \c@subtable=\count272
) 201 201 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/tabularx.sty 202 202 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/tabularx.sty
Package: tabularx 2020/01/15 v2.11c `tabularx' package (DPC) 203 203 Package: tabularx 2020/01/15 v2.11c `tabularx' package (DPC)
204 204
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/array.sty 205 205 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/array.sty
Package: array 2022/09/04 v2.5g Tabular extension package (FMi) 206 206 Package: array 2022/09/04 v2.5g Tabular extension package (FMi)
\col@sep=\dimen163 207 207 \col@sep=\dimen163
\ar@mcellbox=\box51 208 208 \ar@mcellbox=\box51
\extrarowheight=\dimen164 209 209 \extrarowheight=\dimen164
\NC@list=\toks18 210 210 \NC@list=\toks18
\extratabsurround=\skip51 211 211 \extratabsurround=\skip51
\backup@length=\skip52 212 212 \backup@length=\skip52
\ar@cellbox=\box52 213 213 \ar@cellbox=\box52
) 214 214 )
\TX@col@width=\dimen165 215 215 \TX@col@width=\dimen165
\TX@old@table=\dimen166 216 216 \TX@old@table=\dimen166
\TX@old@col=\dimen167 217 217 \TX@old@col=\dimen167
\TX@target=\dimen168 218 218 \TX@target=\dimen168
\TX@delta=\dimen169 219 219 \TX@delta=\dimen169
\TX@cols=\count273 220 220 \TX@cols=\count273
\TX@ftn=\toks19 221 221 \TX@ftn=\toks19
) 222 222 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/multicol.sty 223 223 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/multicol.sty
Package: multicol 2021/11/30 v1.9d multicolumn formatting (FMi) 224 224 Package: multicol 2021/11/30 v1.9d multicolumn formatting (FMi)
\c@tracingmulticols=\count274 225 225 \c@tracingmulticols=\count274
\mult@box=\box53 226 226 \mult@box=\box53
\multicol@leftmargin=\dimen170 227 227 \multicol@leftmargin=\dimen170
\c@unbalance=\count275 228 228 \c@unbalance=\count275
\c@collectmore=\count276 229 229 \c@collectmore=\count276
\doublecol@number=\count277 230 230 \doublecol@number=\count277
\multicoltolerance=\count278 231 231 \multicoltolerance=\count278
\multicolpretolerance=\count279 232 232 \multicolpretolerance=\count279
\full@width=\dimen171 233 233 \full@width=\dimen171
\page@free=\dimen172 234 234 \page@free=\dimen172
\premulticols=\dimen173 235 235 \premulticols=\dimen173
\postmulticols=\dimen174 236 236 \postmulticols=\dimen174
\multicolsep=\skip53 237 237 \multicolsep=\skip53
\multicolbaselineskip=\skip54 238 238 \multicolbaselineskip=\skip54
\partial@page=\box54 239 239 \partial@page=\box54
\last@line=\box55 240 240 \last@line=\box55
\maxbalancingoverflow=\dimen175 241 241 \maxbalancingoverflow=\dimen175
\mult@rightbox=\box56 242 242 \mult@rightbox=\box56
\mult@grightbox=\box57 243 243 \mult@grightbox=\box57
\mult@firstbox=\box58 244 244 \mult@firstbox=\box58
\mult@gfirstbox=\box59 245 245 \mult@gfirstbox=\box59
\@tempa=\box60 246 246 \@tempa=\box60
\@tempa=\box61 247 247 \@tempa=\box61
\@tempa=\box62 248 248 \@tempa=\box62
\@tempa=\box63 249 249 \@tempa=\box63
\@tempa=\box64 250 250 \@tempa=\box64
\@tempa=\box65 251 251 \@tempa=\box65
\@tempa=\box66 252 252 \@tempa=\box66
\@tempa=\box67 253 253 \@tempa=\box67
\@tempa=\box68 254 254 \@tempa=\box68
\@tempa=\box69 255 255 \@tempa=\box69
\@tempa=\box70 256 256 \@tempa=\box70
\@tempa=\box71 257 257 \@tempa=\box71
\@tempa=\box72 258 258 \@tempa=\box72
\@tempa=\box73 259 259 \@tempa=\box73
\@tempa=\box74 260 260 \@tempa=\box74
\@tempa=\box75 261 261 \@tempa=\box75
\@tempa=\box76 262 262 \@tempa=\box76
\@tempa=\box77 263 263 \@tempa=\box77
\@tempa=\box78 264 264 \@tempa=\box78
\@tempa=\box79 265 265 \@tempa=\box79
\@tempa=\box80 266 266 \@tempa=\box80
\@tempa=\box81 267 267 \@tempa=\box81
\@tempa=\box82 268 268 \@tempa=\box82
\@tempa=\box83 269 269 \@tempa=\box83
\@tempa=\box84 270 270 \@tempa=\box84
\@tempa=\box85 271 271 \@tempa=\box85
\@tempa=\box86 272 272 \@tempa=\box86
\@tempa=\box87 273 273 \@tempa=\box87
\@tempa=\box88 274 274 \@tempa=\box88
\@tempa=\box89 275 275 \@tempa=\box89
\@tempa=\box90 276 276 \@tempa=\box90
\@tempa=\box91 277 277 \@tempa=\box91
\@tempa=\box92 278 278 \@tempa=\box92
\@tempa=\box93 279 279 \@tempa=\box93
\@tempa=\box94 280 280 \@tempa=\box94
\@tempa=\box95 281 281 \@tempa=\box95
\c@minrows=\count280 282 282 \c@minrows=\count280
\c@columnbadness=\count281 283 283 \c@columnbadness=\count281
\c@finalcolumnbadness=\count282 284 284 \c@finalcolumnbadness=\count282
\last@try=\dimen176 285 285 \last@try=\dimen176
\multicolovershoot=\dimen177 286 286 \multicolovershoot=\dimen177
\multicolundershoot=\dimen178 287 287 \multicolundershoot=\dimen178
\mult@nat@firstbox=\box96 288 288 \mult@nat@firstbox=\box96
\colbreak@box=\box97 289 289 \colbreak@box=\box97
\mc@col@check@num=\count283 290 290 \mc@col@check@num=\count283
) 291 291 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/colortbl/colortbl.sty 292 292 (/usr/local/texlive/2023/texmf-dist/tex/latex/colortbl/colortbl.sty
Package: colortbl 2022/06/20 v1.0f Color table columns (DPC) 293 293 Package: colortbl 2022/06/20 v1.0f Color table columns (DPC)
\everycr=\toks20 294 294 \everycr=\toks20
\minrowclearance=\skip55 295 295 \minrowclearance=\skip55
\rownum=\count284 296 296 \rownum=\count284
) 297 297 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/picinpar/picinpar.sty 298 298 (/usr/local/texlive/2023/texmf-dist/tex/latex/picinpar/picinpar.sty
Pictures in Paragraphs. Version 1.3, November 22, 2022 299 299 Pictures in Paragraphs. Version 1.3, November 22, 2022
\br=\count285 300 300 \br=\count285
\bl=\count286 301 301 \bl=\count286
\na=\count287 302 302 \na=\count287
\nb=\count288 303 303 \nb=\count288
\tcdsav=\count289 304 304 \tcdsav=\count289
\tcl=\count290 305 305 \tcl=\count290
\tcd=\count291 306 306 \tcd=\count291
\tcn=\count292 307 307 \tcn=\count292
\cumtcl=\count293 308 308 \cumtcl=\count293
\cumpartcl=\count294 309 309 \cumpartcl=\count294
\lftside=\dimen179 310 310 \lftside=\dimen179
\rtside=\dimen180 311 311 \rtside=\dimen180
\hpic=\dimen181 312 312 \hpic=\dimen181
\vpic=\dimen182 313 313 \vpic=\dimen182
\strutilg=\dimen183 314 314 \strutilg=\dimen183
\picwd=\dimen184 315 315 \picwd=\dimen184
\topheight=\dimen185 316 316 \topheight=\dimen185
\ilg=\dimen186 317 317 \ilg=\dimen186
\lpic=\dimen187 318 318 \lpic=\dimen187
\lwindowsep=\dimen188 319 319 \lwindowsep=\dimen188
\rwindowsep=\dimen189 320 320 \rwindowsep=\dimen189
\cumpar=\dimen190 321 321 \cumpar=\dimen190
\twa=\toks21 322 322 \twa=\toks21
\la=\toks22 323 323 \la=\toks22
\ra=\toks23 324 324 \ra=\toks23
\ha=\toks24 325 325 \ha=\toks24
\pictoc=\toks25 326 326 \pictoc=\toks25
\rawtext=\box98 327 327 \rawtext=\box98
\holder=\box99 328 328 \holder=\box99
\windowbox=\box100 329 329 \windowbox=\box100
\wartext=\box101 330 330 \wartext=\box101
\finaltext=\box102 331 331 \finaltext=\box102
\aslice=\box103 332 332 \aslice=\box103
\bslice=\box104 333 333 \bslice=\box104
\wbox=\box105 334 334 \wbox=\box105
\wstrutbox=\box106 335 335 \wstrutbox=\box106
\picbox=\box107 336 336 \picbox=\box107
\waslice=\box108 337 337 \waslice=\box108
\wbslice=\box109 338 338 \wbslice=\box109
\fslice=\box110 339 339 \fslice=\box110
) (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsmath.sty 340 340 ) (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsmath.sty
Package: amsmath 2022/04/08 v2.17n AMS math features 341 341 Package: amsmath 2022/04/08 v2.17n AMS math features
\@mathmargin=\skip56 342 342 \@mathmargin=\skip56
343 343
For additional information on amsmath, use the `?' option. 344 344 For additional information on amsmath, use the `?' option.
(/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amstext.sty 345 345 (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amstext.sty
Package: amstext 2021/08/26 v2.01 AMS text 346 346 Package: amstext 2021/08/26 v2.01 AMS text
347 347
(/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsgen.sty 348 348 (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsgen.sty
File: amsgen.sty 1999/11/30 v2.0 generic functions 349 349 File: amsgen.sty 1999/11/30 v2.0 generic functions
\@emptytoks=\toks26 350 350 \@emptytoks=\toks26
\ex@=\dimen191 351 351 \ex@=\dimen191
)) 352 352 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsbsy.sty 353 353 (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsbsy.sty
Package: amsbsy 1999/11/29 v1.2d Bold Symbols 354 354 Package: amsbsy 1999/11/29 v1.2d Bold Symbols
\pmbraise@=\dimen192 355 355 \pmbraise@=\dimen192
) 356 356 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsopn.sty 357 357 (/usr/local/texlive/2023/texmf-dist/tex/latex/amsmath/amsopn.sty
Package: amsopn 2022/04/08 v2.04 operator names 358 358 Package: amsopn 2022/04/08 v2.04 operator names
) 359 359 )
\inf@bad=\count295 360 360 \inf@bad=\count295
LaTeX Info: Redefining \frac on input line 234. 361 361 LaTeX Info: Redefining \frac on input line 234.
\uproot@=\count296 362 362 \uproot@=\count296
\leftroot@=\count297 363 363 \leftroot@=\count297
LaTeX Info: Redefining \overline on input line 399. 364 364 LaTeX Info: Redefining \overline on input line 399.
LaTeX Info: Redefining \colon on input line 410. 365 365 LaTeX Info: Redefining \colon on input line 410.
\classnum@=\count298 366 366 \classnum@=\count298
\DOTSCASE@=\count299 367 367 \DOTSCASE@=\count299
LaTeX Info: Redefining \ldots on input line 496. 368 368 LaTeX Info: Redefining \ldots on input line 496.
LaTeX Info: Redefining \dots on input line 499. 369 369 LaTeX Info: Redefining \dots on input line 499.
LaTeX Info: Redefining \cdots on input line 620. 370 370 LaTeX Info: Redefining \cdots on input line 620.
\Mathstrutbox@=\box111 371 371 \Mathstrutbox@=\box111
\strutbox@=\box112 372 372 \strutbox@=\box112
LaTeX Info: Redefining \big on input line 722. 373 373 LaTeX Info: Redefining \big on input line 722.
LaTeX Info: Redefining \Big on input line 723. 374 374 LaTeX Info: Redefining \Big on input line 723.
LaTeX Info: Redefining \bigg on input line 724. 375 375 LaTeX Info: Redefining \bigg on input line 724.
LaTeX Info: Redefining \Bigg on input line 725. 376 376 LaTeX Info: Redefining \Bigg on input line 725.
\big@size=\dimen193 377 377 \big@size=\dimen193
LaTeX Font Info: Redeclaring font encoding OML on input line 743. 378 378 LaTeX Font Info: Redeclaring font encoding OML on input line 743.
LaTeX Font Info: Redeclaring font encoding OMS on input line 744. 379 379 LaTeX Font Info: Redeclaring font encoding OMS on input line 744.
\macc@depth=\count300 380 380 \macc@depth=\count300
LaTeX Info: Redefining \bmod on input line 905. 381 381 LaTeX Info: Redefining \bmod on input line 905.
LaTeX Info: Redefining \pmod on input line 910. 382 382 LaTeX Info: Redefining \pmod on input line 910.
LaTeX Info: Redefining \smash on input line 940. 383 383 LaTeX Info: Redefining \smash on input line 940.
LaTeX Info: Redefining \relbar on input line 970. 384 384 LaTeX Info: Redefining \relbar on input line 970.
LaTeX Info: Redefining \Relbar on input line 971. 385 385 LaTeX Info: Redefining \Relbar on input line 971.
\c@MaxMatrixCols=\count301 386 386 \c@MaxMatrixCols=\count301
\dotsspace@=\muskip16 387 387 \dotsspace@=\muskip16
\c@parentequation=\count302 388 388 \c@parentequation=\count302
\dspbrk@lvl=\count303 389 389 \dspbrk@lvl=\count303
\tag@help=\toks27 390 390 \tag@help=\toks27
\row@=\count304 391 391 \row@=\count304
\column@=\count305 392 392 \column@=\count305
\maxfields@=\count306 393 393 \maxfields@=\count306
\andhelp@=\toks28 394 394 \andhelp@=\toks28
\eqnshift@=\dimen194 395 395 \eqnshift@=\dimen194
\alignsep@=\dimen195 396 396 \alignsep@=\dimen195
\tagshift@=\dimen196 397 397 \tagshift@=\dimen196
\tagwidth@=\dimen197 398 398 \tagwidth@=\dimen197
\totwidth@=\dimen198 399 399 \totwidth@=\dimen198
\lineht@=\dimen199 400 400 \lineht@=\dimen199
\@envbody=\toks29 401 401 \@envbody=\toks29
\multlinegap=\skip57 402 402 \multlinegap=\skip57
\multlinetaggap=\skip58 403 403 \multlinetaggap=\skip58
\mathdisplay@stack=\toks30 404 404 \mathdisplay@stack=\toks30
LaTeX Info: Redefining \[ on input line 2953. 405 405 LaTeX Info: Redefining \[ on input line 2953.
LaTeX Info: Redefining \] on input line 2954. 406 406 LaTeX Info: Redefining \] on input line 2954.
) 407 407 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/amscls/amsthm.sty 408 408 (/usr/local/texlive/2023/texmf-dist/tex/latex/amscls/amsthm.sty
Package: amsthm 2020/05/29 v2.20.6 409 409 Package: amsthm 2020/05/29 v2.20.6
\thm@style=\toks31 410 410 \thm@style=\toks31
\thm@bodyfont=\toks32 411 411 \thm@bodyfont=\toks32
\thm@headfont=\toks33 412 412 \thm@headfont=\toks33
\thm@notefont=\toks34 413 413 \thm@notefont=\toks34
\thm@headpunct=\toks35 414 414 \thm@headpunct=\toks35
\thm@preskip=\skip59 415 415 \thm@preskip=\skip59
\thm@postskip=\skip60 416 416 \thm@postskip=\skip60
\thm@headsep=\skip61 417 417 \thm@headsep=\skip61
\dth@everypar=\toks36 418 418 \dth@everypar=\toks36
) 419 419 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thmtools.sty 420 420 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thmtools.sty
Package: thmtools 2023/05/04 v0.76 421 421 Package: thmtools 2023/05/04 v0.76
\thmt@toks=\toks37 422 422 \thmt@toks=\toks37
\c@thmt@dummyctr=\count307 423 423 \c@thmt@dummyctr=\count307
424 424
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-patch.sty 425 425 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-patch.sty
Package: thm-patch 2023/05/04 v0.76 426 426 Package: thm-patch 2023/05/04 v0.76
427 427
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/parseargs.sty 428 428 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/parseargs.sty
Package: parseargs 2023/05/04 v0.76 429 429 Package: parseargs 2023/05/04 v0.76
\@parsespec=\toks38 430 430 \@parsespec=\toks38
)) 431 431 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-kv.sty 432 432 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-kv.sty
Package: thm-kv 2023/05/04 v0.76 433 433 Package: thm-kv 2023/05/04 v0.76
Package thm-kv Info: Theorem names will be uppercased on input line 42. 434 434 Package thm-kv Info: Theorem names will be uppercased on input line 42.
435 435
(/usr/local/texlive/2023/texmf-dist/tex/latex/kvsetkeys/kvsetkeys.sty 436 436 (/usr/local/texlive/2023/texmf-dist/tex/latex/kvsetkeys/kvsetkeys.sty
Package: kvsetkeys 2022-10-05 v1.19 Key value parser (HO) 437 437 Package: kvsetkeys 2022-10-05 v1.19 Key value parser (HO)
) 438 438 )
Package thm-kv Info: kvsetkeys patch (v1.16 or later) on input line 158. 439 439 Package thm-kv Info: kvsetkeys patch (v1.16 or later) on input line 158.
) 440 440 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-autoref.sty 441 441 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-autoref.sty
Package: thm-autoref 2023/05/04 v0.76 442 442 Package: thm-autoref 2023/05/04 v0.76
443 443
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/aliasctr.sty 444 444 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/aliasctr.sty
Package: aliasctr 2023/05/04 v0.76 445 445 Package: aliasctr 2023/05/04 v0.76
)) 446 446 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-listof.sty 447 447 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-listof.sty
Package: thm-listof 2023/05/04 v0.76 448 448 Package: thm-listof 2023/05/04 v0.76
) 449 449 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-restate.sty 450 450 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-restate.sty
Package: thm-restate 2023/05/04 v0.76 451 451 Package: thm-restate 2023/05/04 v0.76
) 452 452 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-amsthm.sty 453 453 (/usr/local/texlive/2023/texmf-dist/tex/latex/thmtools/thm-amsthm.sty
Package: thm-amsthm 2023/05/04 v0.76 454 454 Package: thm-amsthm 2023/05/04 v0.76
\thmt@style@headstyle=\toks39 455 455 \thmt@style@headstyle=\toks39
)) 456 456 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/pifont.sty 457 457 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/pifont.sty
Package: pifont 2020/03/25 PSNFSS-v9.3 Pi font support (SPQR) 458 458 Package: pifont 2020/03/25 PSNFSS-v9.3 Pi font support (SPQR)
LaTeX Font Info: Trying to load font information for U+pzd on input line 63. 459 459 LaTeX Font Info: Trying to load font information for U+pzd on input line 63.
460 460
461 461
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/upzd.fd 462 462 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/upzd.fd
File: upzd.fd 2001/06/04 font definitions for U/pzd. 463 463 File: upzd.fd 2001/06/04 font definitions for U/pzd.
) 464 464 )
LaTeX Font Info: Trying to load font information for U+psy on input line 64. 465 465 LaTeX Font Info: Trying to load font information for U+psy on input line 64.
466 466
467 467
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/upsy.fd 468 468 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/upsy.fd
File: upsy.fd 2001/06/04 font definitions for U/psy. 469 469 File: upsy.fd 2001/06/04 font definitions for U/psy.
)) 470 470 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/setspace/setspace.sty 471 471 (/usr/local/texlive/2023/texmf-dist/tex/latex/setspace/setspace.sty
Package: setspace 2022/12/04 v6.7b set line spacing 472 472 Package: setspace 2022/12/04 v6.7b set line spacing
) 473 473 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/varioref.sty 474 474 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/varioref.sty
Package: varioref 2022/01/09 v1.6f package for extended references (FMi) 475 475 Package: varioref 2022/01/09 v1.6f package for extended references (FMi)
\c@vrcnt=\count308 476 476 \c@vrcnt=\count308
) 477 477 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/txfonts.sty 478 478 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/txfonts.sty
Package: txfonts 2008/01/22 v3.2.1 479 479 Package: txfonts 2008/01/22 v3.2.1
LaTeX Font Info: Redeclaring symbol font `operators' on input line 21. 480 480 LaTeX Font Info: Redeclaring symbol font `operators' on input line 21.
LaTeX Font Info: Overwriting symbol font `operators' in version `normal' 481 481 LaTeX Font Info: Overwriting symbol font `operators' in version `normal'
(Font) OT1/cmr/m/n --> OT1/txr/m/n on input line 21. 482 482 (Font) OT1/cmr/m/n --> OT1/txr/m/n on input line 21.
LaTeX Font Info: Overwriting symbol font `operators' in version `bold' 483 483 LaTeX Font Info: Overwriting symbol font `operators' in version `bold'
(Font) OT1/cmr/bx/n --> OT1/txr/m/n on input line 21. 484 484 (Font) OT1/cmr/bx/n --> OT1/txr/m/n on input line 21.
LaTeX Font Info: Overwriting symbol font `operators' in version `bold' 485 485 LaTeX Font Info: Overwriting symbol font `operators' in version `bold'
(Font) OT1/txr/m/n --> OT1/txr/bx/n on input line 22. 486 486 (Font) OT1/txr/m/n --> OT1/txr/bx/n on input line 22.
\symitalic=\mathgroup4 487 487 \symitalic=\mathgroup4
LaTeX Font Info: Overwriting symbol font `italic' in version `bold' 488 488 LaTeX Font Info: Overwriting symbol font `italic' in version `bold'
(Font) OT1/txr/m/it --> OT1/txr/bx/it on input line 26. 489 489 (Font) OT1/txr/m/it --> OT1/txr/bx/it on input line 26.
LaTeX Font Info: Redeclaring math alphabet \mathbf on input line 29. 490 490 LaTeX Font Info: Redeclaring math alphabet \mathbf on input line 29.
LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `normal' 491 491 LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `normal'
(Font) OT1/cmr/bx/n --> OT1/txr/bx/n on input line 29. 492 492 (Font) OT1/cmr/bx/n --> OT1/txr/bx/n on input line 29.
LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `bold' 493 493 LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `bold'
(Font) OT1/cmr/bx/n --> OT1/txr/bx/n on input line 29. 494 494 (Font) OT1/cmr/bx/n --> OT1/txr/bx/n on input line 29.
LaTeX Font Info: Redeclaring math alphabet \mathit on input line 30. 495 495 LaTeX Font Info: Redeclaring math alphabet \mathit on input line 30.
LaTeX Font Info: Overwriting math alphabet `\mathit' in version `normal' 496 496 LaTeX Font Info: Overwriting math alphabet `\mathit' in version `normal'
(Font) OT1/cmr/m/it --> OT1/txr/m/it on input line 30. 497 497 (Font) OT1/cmr/m/it --> OT1/txr/m/it on input line 30.
LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold' 498 498 LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold'
(Font) OT1/cmr/bx/it --> OT1/txr/m/it on input line 30. 499 499 (Font) OT1/cmr/bx/it --> OT1/txr/m/it on input line 30.
LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold' 500 500 LaTeX Font Info: Overwriting math alphabet `\mathit' in version `bold'
(Font) OT1/txr/m/it --> OT1/txr/bx/it on input line 31. 501 501 (Font) OT1/txr/m/it --> OT1/txr/bx/it on input line 31.
LaTeX Font Info: Redeclaring math alphabet \mathsf on input line 40. 502 502 LaTeX Font Info: Redeclaring math alphabet \mathsf on input line 40.
LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `normal' 503 503 LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `normal'
(Font) OT1/cmss/m/n --> OT1/txss/m/n on input line 40. 504 504 (Font) OT1/cmss/m/n --> OT1/txss/m/n on input line 40.
LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold' 505 505 LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold'
(Font) OT1/cmss/bx/n --> OT1/txss/m/n on input line 40. 506 506 (Font) OT1/cmss/bx/n --> OT1/txss/m/n on input line 40.
LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold' 507 507 LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold'
(Font) OT1/txss/m/n --> OT1/txss/b/n on input line 41. 508 508 (Font) OT1/txss/m/n --> OT1/txss/b/n on input line 41.
LaTeX Font Info: Redeclaring math alphabet \mathtt on input line 50. 509 509 LaTeX Font Info: Redeclaring math alphabet \mathtt on input line 50.
LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `normal' 510 510 LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `normal'
(Font) OT1/cmtt/m/n --> OT1/txtt/m/n on input line 50. 511 511 (Font) OT1/cmtt/m/n --> OT1/txtt/m/n on input line 50.
LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold' 512 512 LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold'
(Font) OT1/cmtt/m/n --> OT1/txtt/m/n on input line 50. 513 513 (Font) OT1/cmtt/m/n --> OT1/txtt/m/n on input line 50.
LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold' 514 514 LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold'
(Font) OT1/txtt/m/n --> OT1/txtt/b/n on input line 51. 515 515 (Font) OT1/txtt/m/n --> OT1/txtt/b/n on input line 51.
LaTeX Font Info: Redeclaring symbol font `letters' on input line 58. 516 516 LaTeX Font Info: Redeclaring symbol font `letters' on input line 58.
LaTeX Font Info: Overwriting symbol font `letters' in version `normal' 517 517 LaTeX Font Info: Overwriting symbol font `letters' in version `normal'
(Font) OML/cmm/m/it --> OML/txmi/m/it on input line 58. 518 518 (Font) OML/cmm/m/it --> OML/txmi/m/it on input line 58.
LaTeX Font Info: Overwriting symbol font `letters' in version `bold' 519 519 LaTeX Font Info: Overwriting symbol font `letters' in version `bold'
(Font) OML/cmm/b/it --> OML/txmi/m/it on input line 58. 520 520 (Font) OML/cmm/b/it --> OML/txmi/m/it on input line 58.
LaTeX Font Info: Overwriting symbol font `letters' in version `bold' 521 521 LaTeX Font Info: Overwriting symbol font `letters' in version `bold'
(Font) OML/txmi/m/it --> OML/txmi/bx/it on input line 59. 522 522 (Font) OML/txmi/m/it --> OML/txmi/bx/it on input line 59.
\symlettersA=\mathgroup5 523 523 \symlettersA=\mathgroup5
LaTeX Font Info: Overwriting symbol font `lettersA' in version `bold' 524 524 LaTeX Font Info: Overwriting symbol font `lettersA' in version `bold'
(Font) U/txmia/m/it --> U/txmia/bx/it on input line 67. 525 525 (Font) U/txmia/m/it --> U/txmia/bx/it on input line 67.
LaTeX Font Info: Redeclaring symbol font `symbols' on input line 77. 526 526 LaTeX Font Info: Redeclaring symbol font `symbols' on input line 77.
LaTeX Font Info: Overwriting symbol font `symbols' in version `normal' 527 527 LaTeX Font Info: Overwriting symbol font `symbols' in version `normal'
(Font) OMS/cmsy/m/n --> OMS/txsy/m/n on input line 77. 528 528 (Font) OMS/cmsy/m/n --> OMS/txsy/m/n on input line 77.
LaTeX Font Info: Overwriting symbol font `symbols' in version `bold' 529 529 LaTeX Font Info: Overwriting symbol font `symbols' in version `bold'
(Font) OMS/cmsy/b/n --> OMS/txsy/m/n on input line 77. 530 530 (Font) OMS/cmsy/b/n --> OMS/txsy/m/n on input line 77.
LaTeX Font Info: Overwriting symbol font `symbols' in version `bold' 531 531 LaTeX Font Info: Overwriting symbol font `symbols' in version `bold'
(Font) OMS/txsy/m/n --> OMS/txsy/bx/n on input line 78. 532 532 (Font) OMS/txsy/m/n --> OMS/txsy/bx/n on input line 78.
\symAMSa=\mathgroup6 533 533 \symAMSa=\mathgroup6
LaTeX Font Info: Overwriting symbol font `AMSa' in version `bold' 534 534 LaTeX Font Info: Overwriting symbol font `AMSa' in version `bold'
(Font) U/txsya/m/n --> U/txsya/bx/n on input line 94. 535 535 (Font) U/txsya/m/n --> U/txsya/bx/n on input line 94.
\symAMSb=\mathgroup7 536 536 \symAMSb=\mathgroup7
LaTeX Font Info: Overwriting symbol font `AMSb' in version `bold' 537 537 LaTeX Font Info: Overwriting symbol font `AMSb' in version `bold'
(Font) U/txsyb/m/n --> U/txsyb/bx/n on input line 103. 538 538 (Font) U/txsyb/m/n --> U/txsyb/bx/n on input line 103.
\symsymbolsC=\mathgroup8 539 539 \symsymbolsC=\mathgroup8
LaTeX Font Info: Overwriting symbol font `symbolsC' in version `bold' 540 540 LaTeX Font Info: Overwriting symbol font `symbolsC' in version `bold'
(Font) U/txsyc/m/n --> U/txsyc/bx/n on input line 113. 541 541 (Font) U/txsyc/m/n --> U/txsyc/bx/n on input line 113.
LaTeX Font Info: Redeclaring symbol font `largesymbols' on input line 120. 542 542 LaTeX Font Info: Redeclaring symbol font `largesymbols' on input line 120.
LaTeX Font Info: Overwriting symbol font `largesymbols' in version `normal' 543 543 LaTeX Font Info: Overwriting symbol font `largesymbols' in version `normal'
(Font) OMX/cmex/m/n --> OMX/txex/m/n on input line 120. 544 544 (Font) OMX/cmex/m/n --> OMX/txex/m/n on input line 120.
LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold' 545 545 LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold'
(Font) OMX/cmex/m/n --> OMX/txex/m/n on input line 120. 546 546 (Font) OMX/cmex/m/n --> OMX/txex/m/n on input line 120.
LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold' 547 547 LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold'
(Font) OMX/txex/m/n --> OMX/txex/bx/n on input line 121. 548 548 (Font) OMX/txex/m/n --> OMX/txex/bx/n on input line 121.
\symlargesymbolsA=\mathgroup9 549 549 \symlargesymbolsA=\mathgroup9
LaTeX Font Info: Overwriting symbol font `largesymbolsA' in version `bold' 550 550 LaTeX Font Info: Overwriting symbol font `largesymbolsA' in version `bold'
(Font) U/txexa/m/n --> U/txexa/bx/n on input line 129. 551 551 (Font) U/txexa/m/n --> U/txexa/bx/n on input line 129.
LaTeX Font Info: Redeclaring math symbol \mathsterling on input line 164. 552 552 LaTeX Font Info: Redeclaring math symbol \mathsterling on input line 164.
LaTeX Font Info: Redeclaring math symbol \hbar on input line 591. 553 553 LaTeX Font Info: Redeclaring math symbol \hbar on input line 591.
LaTeX Info: Redefining \not on input line 1043. 554 554 LaTeX Info: Redefining \not on input line 1043.
LaTeX Info: Redefining \textsquare on input line 1063. 555 555 LaTeX Info: Redefining \textsquare on input line 1063.
LaTeX Info: Redefining \openbox on input line 1064. 556 556 LaTeX Info: Redefining \openbox on input line 1064.
) 557 557 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/relsize/relsize.sty 558 558 (/usr/local/texlive/2023/texmf-dist/tex/latex/relsize/relsize.sty
Package: relsize 2013/03/29 ver 4.1 559 559 Package: relsize 2013/03/29 ver 4.1
) 560 560 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/xkeyval/xkeyval.sty 561 561 (/usr/local/texlive/2023/texmf-dist/tex/latex/xkeyval/xkeyval.sty
Package: xkeyval 2022/06/16 v2.9 package option processing (HA) 562 562 Package: xkeyval 2022/06/16 v2.9 package option processing (HA)
563 563
(/usr/local/texlive/2023/texmf-dist/tex/generic/xkeyval/xkeyval.tex 564 564 (/usr/local/texlive/2023/texmf-dist/tex/generic/xkeyval/xkeyval.tex
(/usr/local/texlive/2023/texmf-dist/tex/generic/xkeyval/xkvutils.tex 565 565 (/usr/local/texlive/2023/texmf-dist/tex/generic/xkeyval/xkvutils.tex
\XKV@toks=\toks40 566 566 \XKV@toks=\toks40
\XKV@tempa@toks=\toks41 567 567 \XKV@tempa@toks=\toks41
) 568 568 )
\XKV@depth=\count309 569 569 \XKV@depth=\count309
File: xkeyval.tex 2014/12/03 v2.7a key=value parser (HA) 570 570 File: xkeyval.tex 2014/12/03 v2.7a key=value parser (HA)
)) 571 571 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyphenat/hyphenat.sty 572 572 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyphenat/hyphenat.sty
Package: hyphenat 2009/09/02 v2.3c hyphenation utilities 573 573 Package: hyphenat 2009/09/02 v2.3c hyphenation utilities
\langwohyphens=\language88 574 574 \langwohyphens=\language88
LaTeX Info: Redefining \_ on input line 43. 575 575 LaTeX Info: Redefining \_ on input line 43.
) 576 576 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/bbm-macros/bbm.sty 577 577 (/usr/local/texlive/2023/texmf-dist/tex/latex/bbm-macros/bbm.sty
Package: bbm 1999/03/15 V 1.2 provides fonts for set symbols - TH 578 578 Package: bbm 1999/03/15 V 1.2 provides fonts for set symbols - TH
LaTeX Font Info: Overwriting math alphabet `\mathbbm' in version `bold' 579 579 LaTeX Font Info: Overwriting math alphabet `\mathbbm' in version `bold'
(Font) U/bbm/m/n --> U/bbm/bx/n on input line 33. 580 580 (Font) U/bbm/m/n --> U/bbm/bx/n on input line 33.
LaTeX Font Info: Overwriting math alphabet `\mathbbmss' in version `bold' 581 581 LaTeX Font Info: Overwriting math alphabet `\mathbbmss' in version `bold'
(Font) U/bbmss/m/n --> U/bbmss/bx/n on input line 35. 582 582 (Font) U/bbmss/m/n --> U/bbmss/bx/n on input line 35.
) 583 583 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/environ/environ.sty 584 584 (/usr/local/texlive/2023/texmf-dist/tex/latex/environ/environ.sty
Package: environ 2014/05/04 v0.3 A new way to define environments 585 585 Package: environ 2014/05/04 v0.3 A new way to define environments
586 586
(/usr/local/texlive/2023/texmf-dist/tex/latex/trimspaces/trimspaces.sty 587 587 (/usr/local/texlive/2023/texmf-dist/tex/latex/trimspaces/trimspaces.sty
Package: trimspaces 2009/09/17 v1.1 Trim spaces around a token list 588 588 Package: trimspaces 2009/09/17 v1.1 Trim spaces around a token list
)) 589 589 ))
\c@upm@subfigure@count=\count310 590 590 \c@upm@subfigure@count=\count310
\c@upm@fmt@mtabular@columnnumber=\count311 591 591 \c@upm@fmt@mtabular@columnnumber=\count311
\c@upm@format@section@sectionlevel=\count312 592 592 \c@upm@format@section@sectionlevel=\count312
\c@upm@fmt@savedcounter=\count313 593 593 \c@upm@fmt@savedcounter=\count313
\c@@@upm@fmt@inlineenumeration=\count314 594 594 \c@@@upm@fmt@inlineenumeration=\count314
\c@@upm@fmt@enumdescription@cnt@=\count315 595 595 \c@@upm@fmt@enumdescription@cnt@=\count315
\upm@framed@minipage=\box113 596 596 \upm@framed@minipage=\box113
\upm@highlight@box@save=\box114 597 597 \upm@highlight@box@save=\box114
\c@upmdefinition=\count316 598 598 \c@upmdefinition=\count316
) 599 599 )
(./upmethodology-version.sty 600 600 (./upmethodology-version.sty
Package: upmethodology-version 2013/08/26 601 601 Package: upmethodology-version 2013/08/26
602 602
**** upmethodology-version is using French language **** 603 603 **** upmethodology-version is using French language ****
\upm@tmp@a=\count317 604 604 \upm@tmp@a=\count317
) 605 605 )
\listendskip=\skip62 606 606 \listendskip=\skip62
) 607 607 )
(./upmethodology-frontpage.sty 608 608 (./upmethodology-frontpage.sty
Package: upmethodology-frontpage 2015/06/26 609 609 Package: upmethodology-frontpage 2015/06/26
610 610
**** upmethodology-frontpage is using French language **** 611 611 **** upmethodology-frontpage is using French language ****
\upm@front@tmpa=\dimen256 612 612 \upm@front@tmpa=\dimen256
\upm@front@tmpb=\dimen257 613 613 \upm@front@tmpb=\dimen257
614 614
*** define extension value frontillustrationsize ****) 615 615 *** define extension value frontillustrationsize ****)
(./upmethodology-backpage.sty 616 616 (./upmethodology-backpage.sty
Package: upmethodology-backpage 2013/12/14 617 617 Package: upmethodology-backpage 2013/12/14
618 618
**** upmethodology-backpage is using French language ****) 619 619 **** upmethodology-backpage is using French language ****)
(/usr/local/texlive/2023/texmf-dist/tex/latex/url/url.sty 620 620 (/usr/local/texlive/2023/texmf-dist/tex/latex/url/url.sty
\Urlmuskip=\muskip17 621 621 \Urlmuskip=\muskip17
Package: url 2013/09/16 ver 3.4 Verb mode for urls, etc. 622 622 Package: url 2013/09/16 ver 3.4 Verb mode for urls, etc.
) 623 623 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/hyperref.sty 624 624 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/hyperref.sty
Package: hyperref 2023-05-16 v7.00y Hypertext links for LaTeX 625 625 Package: hyperref 2023-05-16 v7.00y Hypertext links for LaTeX
626 626
(/usr/local/texlive/2023/texmf-dist/tex/generic/ltxcmds/ltxcmds.sty 627 627 (/usr/local/texlive/2023/texmf-dist/tex/generic/ltxcmds/ltxcmds.sty
Package: ltxcmds 2020-05-10 v1.25 LaTeX kernel commands for general use (HO) 628 628 Package: ltxcmds 2020-05-10 v1.25 LaTeX kernel commands for general use (HO)
) 629 629 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/pdftexcmds/pdftexcmds.sty 630 630 (/usr/local/texlive/2023/texmf-dist/tex/generic/pdftexcmds/pdftexcmds.sty
Package: pdftexcmds 2020-06-27 v0.33 Utility functions of pdfTeX for LuaTeX (HO)
633 633
(/usr/local/texlive/2023/texmf-dist/tex/generic/infwarerr/infwarerr.sty 634 634 (/usr/local/texlive/2023/texmf-dist/tex/generic/infwarerr/infwarerr.sty
Package: infwarerr 2019/12/03 v1.5 Providing info/warning/error messages (HO) 635 635 Package: infwarerr 2019/12/03 v1.5 Providing info/warning/error messages (HO)
) 636 636 )
Package pdftexcmds Info: \pdf@primitive is available. 637 637 Package pdftexcmds Info: \pdf@primitive is available.
Package pdftexcmds Info: \pdf@ifprimitive is available. 638 638 Package pdftexcmds Info: \pdf@ifprimitive is available.
Package pdftexcmds Info: \pdfdraftmode found. 639 639 Package pdftexcmds Info: \pdfdraftmode found.
) 640 640 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/kvdefinekeys/kvdefinekeys.sty 641 641 (/usr/local/texlive/2023/texmf-dist/tex/generic/kvdefinekeys/kvdefinekeys.sty
Package: kvdefinekeys 2019-12-19 v1.6 Define keys (HO) 642 642 Package: kvdefinekeys 2019-12-19 v1.6 Define keys (HO)
) 643 643 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/pdfescape/pdfescape.sty 644 644 (/usr/local/texlive/2023/texmf-dist/tex/generic/pdfescape/pdfescape.sty
Package: pdfescape 2019/12/09 v1.15 Implements pdfTeX's escape features (HO) 645 645 Package: pdfescape 2019/12/09 v1.15 Implements pdfTeX's escape features (HO)
) 646 646 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/hycolor/hycolor.sty 647 647 (/usr/local/texlive/2023/texmf-dist/tex/latex/hycolor/hycolor.sty
Package: hycolor 2020-01-27 v1.10 Color options for hyperref/bookmark (HO) 648 648 Package: hycolor 2020-01-27 v1.10 Color options for hyperref/bookmark (HO)
) 649 649 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/letltxmacro/letltxmacro.sty 650 650 (/usr/local/texlive/2023/texmf-dist/tex/latex/letltxmacro/letltxmacro.sty
Package: letltxmacro 2019/12/03 v1.6 Let assignment for LaTeX macros (HO) 651 651 Package: letltxmacro 2019/12/03 v1.6 Let assignment for LaTeX macros (HO)
) 652 652 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/auxhook/auxhook.sty 653 653 (/usr/local/texlive/2023/texmf-dist/tex/latex/auxhook/auxhook.sty
Package: auxhook 2019-12-17 v1.6 Hooks for auxiliary files (HO) 654 654 Package: auxhook 2019-12-17 v1.6 Hooks for auxiliary files (HO)
) 655 655 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/nameref.sty 656 656 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/nameref.sty
Package: nameref 2023-05-16 v2.51 Cross-referencing by name of section 657 657 Package: nameref 2023-05-16 v2.51 Cross-referencing by name of section
658 658
(/usr/local/texlive/2023/texmf-dist/tex/latex/refcount/refcount.sty 659 659 (/usr/local/texlive/2023/texmf-dist/tex/latex/refcount/refcount.sty
Package: refcount 2019/12/15 v3.6 Data extraction from label references (HO) 660 660 Package: refcount 2019/12/15 v3.6 Data extraction from label references (HO)
) 661 661 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/gettitlestring/gettitlestring.sty
Package: gettitlestring 2019/12/15 v1.6 Cleanup title references (HO)
(/usr/local/texlive/2023/texmf-dist/tex/latex/kvoptions/kvoptions.sty 665 665 (/usr/local/texlive/2023/texmf-dist/tex/latex/kvoptions/kvoptions.sty
Package: kvoptions 2022-06-15 v3.15 Key value format for package options (HO) 666 666 Package: kvoptions 2022-06-15 v3.15 Key value format for package options (HO)
)) 667 667 ))
\c@section@level=\count318 668 668 \c@section@level=\count318
) 669 669 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/etoolbox/etoolbox.sty 670 670 (/usr/local/texlive/2023/texmf-dist/tex/latex/etoolbox/etoolbox.sty
Package: etoolbox 2020/10/05 v2.5k e-TeX tools for LaTeX (JAW) 671 671 Package: etoolbox 2020/10/05 v2.5k e-TeX tools for LaTeX (JAW)
\etb@tempcnta=\count319 672 672 \etb@tempcnta=\count319
) 673 673 )
\@linkdim=\dimen258 674 674 \@linkdim=\dimen258
\Hy@linkcounter=\count320 675 675 \Hy@linkcounter=\count320
\Hy@pagecounter=\count321 676 676 \Hy@pagecounter=\count321
677 677
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/pd1enc.def 678 678 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/pd1enc.def
File: pd1enc.def 2023-05-16 v7.00y Hyperref: PDFDocEncoding definition (HO) 679 679 File: pd1enc.def 2023-05-16 v7.00y Hyperref: PDFDocEncoding definition (HO)
Now handling font encoding PD1 ... 680 680 Now handling font encoding PD1 ...
... no UTF-8 mapping file for font encoding PD1 681 681 ... no UTF-8 mapping file for font encoding PD1
) 682 682 )
(/usr/local/texlive/2023/texmf-dist/tex/generic/intcalc/intcalc.sty 683 683 (/usr/local/texlive/2023/texmf-dist/tex/generic/intcalc/intcalc.sty
Package: intcalc 2019/12/15 v1.3 Expandable calculations with integers (HO) 684 684 Package: intcalc 2019/12/15 v1.3 Expandable calculations with integers (HO)
) 685 685 )
\Hy@SavedSpaceFactor=\count322 686 686 \Hy@SavedSpaceFactor=\count322
687 687
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/puenc.def 688 688 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/puenc.def
File: puenc.def 2023-05-16 v7.00y Hyperref: PDF Unicode definition (HO) 689 689 File: puenc.def 2023-05-16 v7.00y Hyperref: PDF Unicode definition (HO)
Now handling font encoding PU ... 690 690 Now handling font encoding PU ...
... no UTF-8 mapping file for font encoding PU 691 691 ... no UTF-8 mapping file for font encoding PU
) 692 692 )
Package hyperref Info: Option `breaklinks' set `true' on input line 4050. 693 693 Package hyperref Info: Option `breaklinks' set `true' on input line 4050.
Package hyperref Info: Option `pageanchor' set `true' on input line 4050. 694 694 Package hyperref Info: Option `pageanchor' set `true' on input line 4050.
Package hyperref Info: Option `bookmarks' set `false' on input line 4050. 695 695 Package hyperref Info: Option `bookmarks' set `false' on input line 4050.
Package hyperref Info: Option `hyperfigures' set `true' on input line 4050. 696 696 Package hyperref Info: Option `hyperfigures' set `true' on input line 4050.
Package hyperref Info: Option `hyperindex' set `true' on input line 4050. 697 697 Package hyperref Info: Option `hyperindex' set `true' on input line 4050.
Package hyperref Info: Option `linktocpage' set `true' on input line 4050. 698 698 Package hyperref Info: Option `linktocpage' set `true' on input line 4050.
Package hyperref Info: Option `bookmarks' set `true' on input line 4050. 699 699 Package hyperref Info: Option `bookmarks' set `true' on input line 4050.
Package hyperref Info: Option `bookmarksopen' set `true' on input line 4050. 700 700 Package hyperref Info: Option `bookmarksopen' set `true' on input line 4050.
Package hyperref Info: Option `bookmarksnumbered' set `true' on input line 4050.
Package hyperref Info: Option `colorlinks' set `false' on input line 4050. 703 703 Package hyperref Info: Option `colorlinks' set `false' on input line 4050.
Package hyperref Info: Hyper figures ON on input line 4165. 704 704 Package hyperref Info: Hyper figures ON on input line 4165.
Package hyperref Info: Link nesting OFF on input line 4172. 705 705 Package hyperref Info: Link nesting OFF on input line 4172.
Package hyperref Info: Hyper index ON on input line 4175. 706 706 Package hyperref Info: Hyper index ON on input line 4175.
Package hyperref Info: Plain pages OFF on input line 4182. 707 707 Package hyperref Info: Plain pages OFF on input line 4182.
Package hyperref Info: Backreferencing OFF on input line 4187. 708 708 Package hyperref Info: Backreferencing OFF on input line 4187.
Package hyperref Info: Implicit mode ON; LaTeX internals redefined. 709 709 Package hyperref Info: Implicit mode ON; LaTeX internals redefined.
Package hyperref Info: Bookmarks ON on input line 4434. 710 710 Package hyperref Info: Bookmarks ON on input line 4434.
LaTeX Info: Redefining \href on input line 4683. 711 711 LaTeX Info: Redefining \href on input line 4683.
\c@Hy@tempcnt=\count323 712 712 \c@Hy@tempcnt=\count323
LaTeX Info: Redefining \url on input line 4772. 713 713 LaTeX Info: Redefining \url on input line 4772.
\XeTeXLinkMargin=\dimen259 714 714 \XeTeXLinkMargin=\dimen259
715 715
(/usr/local/texlive/2023/texmf-dist/tex/generic/bitset/bitset.sty 716 716 (/usr/local/texlive/2023/texmf-dist/tex/generic/bitset/bitset.sty
Package: bitset 2019/12/09 v1.3 Handle bit-vector datatype (HO) 717 717 Package: bitset 2019/12/09 v1.3 Handle bit-vector datatype (HO)
718 718
(/usr/local/texlive/2023/texmf-dist/tex/generic/bigintcalc/bigintcalc.sty 719 719 (/usr/local/texlive/2023/texmf-dist/tex/generic/bigintcalc/bigintcalc.sty
Package: bigintcalc 2019/12/15 v1.5 Expandable calculations on big integers (HO)
)) 722 722 ))
\Fld@menulength=\count324 723 723 \Fld@menulength=\count324
\Field@Width=\dimen260 724 724 \Field@Width=\dimen260
\Fld@charsize=\dimen261 725 725 \Fld@charsize=\dimen261
Package hyperref Info: Hyper figures ON on input line 6049. 726 726 Package hyperref Info: Hyper figures ON on input line 6049.
Package hyperref Info: Link nesting OFF on input line 6056. 727 727 Package hyperref Info: Link nesting OFF on input line 6056.
Package hyperref Info: Hyper index ON on input line 6059. 728 728 Package hyperref Info: Hyper index ON on input line 6059.
Package hyperref Info: backreferencing OFF on input line 6066. 729 729 Package hyperref Info: backreferencing OFF on input line 6066.
Package hyperref Info: Link coloring OFF on input line 6071. 730 730 Package hyperref Info: Link coloring OFF on input line 6071.
Package hyperref Info: Link coloring with OCG OFF on input line 6076. 731 731 Package hyperref Info: Link coloring with OCG OFF on input line 6076.
Package hyperref Info: PDF/A mode OFF on input line 6081. 732 732 Package hyperref Info: PDF/A mode OFF on input line 6081.
733 733
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/atbegshi-ltx.sty 734 734 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/atbegshi-ltx.sty
Package: atbegshi-ltx 2021/01/10 v1.0c Emulation of the original atbegshi package with kernel methods
) 737 737 )
\Hy@abspage=\count325 738 738 \Hy@abspage=\count325
\c@Item=\count326 739 739 \c@Item=\count326
\c@Hfootnote=\count327 740 740 \c@Hfootnote=\count327
) 741 741 )
Package hyperref Info: Driver: hpdftex. 742 742 Package hyperref Info: Driver: hpdftex.
743 743
(/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/hpdftex.def 744 744 (/usr/local/texlive/2023/texmf-dist/tex/latex/hyperref/hpdftex.def
File: hpdftex.def 2023-05-16 v7.00y Hyperref driver for pdfTeX 745 745 File: hpdftex.def 2023-05-16 v7.00y Hyperref driver for pdfTeX
746 746
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/atveryend-ltx.sty 747 747 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/atveryend-ltx.sty
Package: atveryend-ltx 2020/08/19 v1.0a Emulation of the original atveryend package with kernel methods
) 751 751 )
\Fld@listcount=\count328 752 752 \Fld@listcount=\count328
\c@bookmark@seq@number=\count329 753 753 \c@bookmark@seq@number=\count329
754 754
(/usr/local/texlive/2023/texmf-dist/tex/latex/rerunfilecheck/rerunfilecheck.sty 755 755 (/usr/local/texlive/2023/texmf-dist/tex/latex/rerunfilecheck/rerunfilecheck.sty
Package: rerunfilecheck 2022-07-10 v1.10 Rerun checks for auxiliary files (HO) 756 756 Package: rerunfilecheck 2022-07-10 v1.10 Rerun checks for auxiliary files (HO)
757 757
(/usr/local/texlive/2023/texmf-dist/tex/generic/uniquecounter/uniquecounter.sty 758 758 (/usr/local/texlive/2023/texmf-dist/tex/generic/uniquecounter/uniquecounter.sty
Package: uniquecounter 2019/12/15 v1.4 Provide unlimited unique counter (HO) 759 759 Package: uniquecounter 2019/12/15 v1.4 Provide unlimited unique counter (HO)
) 760 760 )
Package uniquecounter Info: New unique counter `rerunfilecheck' on input line 285.
) 763 763 )
\Hy@SectionHShift=\skip63 764 764 \Hy@SectionHShift=\skip63
) 765 765 )
\upm@smalllogo@height=\dimen262 766 766 \upm@smalllogo@height=\dimen262
) (./spimbasephdthesis.sty 767 767 ) (./spimbasephdthesis.sty
Package: spimbasephdthesis 2015/09/01 768 768 Package: spimbasephdthesis 2015/09/01
769 769
(/usr/local/texlive/2023/texmf-dist/tex/latex/lettrine/lettrine.sty 770 770 (/usr/local/texlive/2023/texmf-dist/tex/latex/lettrine/lettrine.sty
File: lettrine.sty 2023-04-18 v2.40 (Daniel Flipo) 771 771 File: lettrine.sty 2023-04-18 v2.40 (Daniel Flipo)
772 772
(/usr/local/texlive/2023/texmf-dist/tex/latex/l3packages/xfp/xfp.sty 773 773 (/usr/local/texlive/2023/texmf-dist/tex/latex/l3packages/xfp/xfp.sty
(/usr/local/texlive/2023/texmf-dist/tex/latex/l3kernel/expl3.sty 774 774 (/usr/local/texlive/2023/texmf-dist/tex/latex/l3kernel/expl3.sty
Package: expl3 2023-05-22 L3 programming layer (loader) 775 775 Package: expl3 2023-05-22 L3 programming layer (loader)
776 776
(/usr/local/texlive/2023/texmf-dist/tex/latex/l3backend/l3backend-pdftex.def 777 777 (/usr/local/texlive/2023/texmf-dist/tex/latex/l3backend/l3backend-pdftex.def
File: l3backend-pdftex.def 2023-04-19 L3 backend support: PDF output (pdfTeX) 778 778 File: l3backend-pdftex.def 2023-04-19 L3 backend support: PDF output (pdfTeX)
\l__color_backend_stack_int=\count330 779 779 \l__color_backend_stack_int=\count330
\l__pdf_internal_box=\box115 780 780 \l__pdf_internal_box=\box115
)) 781 781 ))
Package: xfp 2023-02-02 L3 Floating point unit 782 782 Package: xfp 2023-02-02 L3 Floating point unit
) 783 783 )
\c@DefaultLines=\count331 784 784 \c@DefaultLines=\count331
\c@DefaultDepth=\count332 785 785 \c@DefaultDepth=\count332
\DefaultFindent=\dimen263 786 786 \DefaultFindent=\dimen263
\DefaultNindent=\dimen264 787 787 \DefaultNindent=\dimen264
\DefaultSlope=\dimen265 788 788 \DefaultSlope=\dimen265
\DiscardVskip=\dimen266 789 789 \DiscardVskip=\dimen266
\L@lbox=\box116 790 790 \L@lbox=\box116
\L@tbox=\box117 791 791 \L@tbox=\box117
\c@L@lines=\count333 792 792 \c@L@lines=\count333
\c@L@depth=\count334 793 793 \c@L@depth=\count334
\L@Pindent=\dimen267 794 794 \L@Pindent=\dimen267
\L@Findent=\dimen268 795 795 \L@Findent=\dimen268
\L@Nindent=\dimen269 796 796 \L@Nindent=\dimen269
\L@lraise=\dimen270 797 797 \L@lraise=\dimen270
\L@first=\dimen271 798 798 \L@first=\dimen271
\L@next=\dimen272 799 799 \L@next=\dimen272
\L@slope=\dimen273 800 800 \L@slope=\dimen273
\L@height=\dimen274 801 801 \L@height=\dimen274
\L@novskip=\dimen275 802 802 \L@novskip=\dimen275
\L@target@ht=\dimen276 803 803 \L@target@ht=\dimen276
\L@target@dp=\dimen277 804 804 \L@target@dp=\dimen277
\L@target@tht=\dimen278 805 805 \L@target@tht=\dimen278
\LettrineWidth=\dimen279 806 806 \LettrineWidth=\dimen279
\LettrineHeight=\dimen280 807 807 \LettrineHeight=\dimen280
\LettrineDepth=\dimen281 808 808 \LettrineDepth=\dimen281
Loading lettrine.cfg 809 809 Loading lettrine.cfg
(/usr/local/texlive/2023/texmf-dist/tex/latex/lettrine/lettrine.cfg) 810 810 (/usr/local/texlive/2023/texmf-dist/tex/latex/lettrine/lettrine.cfg)
\Llist@everypar=\toks42 811 811 \Llist@everypar=\toks42
) 812 812 )
*** define extension value backcovermessage ****) 813 813 *** define extension value backcovermessage ****)
**** including upm extension spimufcphdthesis (upmext-spimufcphdthesis.cfg) *** 814 814 **** including upm extension spimufcphdthesis (upmext-spimufcphdthesis.cfg) ***
* (./upmext-spimufcphdthesis.cfg *** define extension value copyright **** 815 815 * (./upmext-spimufcphdthesis.cfg *** define extension value copyright ****
*** style extension spimufcphdthesis, Copyright (c) 2012--14 Dr. Stéphane GALLAND. ****
*** define extension value trademarks **** 823 823 *** define extension value trademarks ****
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/helvet.sty 824 824 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/helvet.sty
Package: helvet 2020/03/25 PSNFSS-v9.3 (WaS) 825 825 Package: helvet 2020/03/25 PSNFSS-v9.3 (WaS)
) 826 826 )
*** define extension value frontillustration **** 827 827 *** define extension value frontillustration ****
*** define extension value p3illustration **** 828 828 *** define extension value p3illustration ****
*** define extension value backillustration **** 829 829 *** define extension value backillustration ****
*** define extension value watermarksize **** 830 830 *** define extension value watermarksize ****
*** define extension value universityname **** 831 831 *** define extension value universityname ****
*** define extension value speciality **** 832 832 *** define extension value speciality ****
*** define extension value defensedate **** 833 833 *** define extension value defensedate ****
*** define extension value jurytabwidth **** 834 834 *** define extension value jurytabwidth ****
*** define extension value jurystyle **** 835 835 *** define extension value jurystyle ****
*** define extension value defensemessage ****)) 836 836 *** define extension value defensemessage ****))
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/inputenc.sty 837 837 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/inputenc.sty
Package: inputenc 2021/02/14 v1.3d Input encoding file 838 838 Package: inputenc 2021/02/14 v1.3d Input encoding file
\inpenc@prehook=\toks43 839 839 \inpenc@prehook=\toks43
\inpenc@posthook=\toks44 840 840 \inpenc@posthook=\toks44
) 841 841 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/base/fontenc.sty 842 842 (/usr/local/texlive/2023/texmf-dist/tex/latex/base/fontenc.sty
Package: fontenc 2021/04/29 v2.0v Standard LaTeX package 843 843 Package: fontenc 2021/04/29 v2.0v Standard LaTeX package
LaTeX Font Info:    Trying to load font information for T1+phv on input line 112.
846 846
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/t1phv.fd 847 847 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/t1phv.fd
File: t1phv.fd 2020/03/25 scalable font definitions for T1/phv. 848 848 File: t1phv.fd 2020/03/25 scalable font definitions for T1/phv.
)) 849 849 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/times.sty 850 850 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/times.sty
Package: times 2020/03/25 PSNFSS-v9.3 (SPQR) 851 851 Package: times 2020/03/25 PSNFSS-v9.3 (SPQR)
) 852 852 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/adjustbox.sty 853 853 (/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/adjustbox.sty
Package: adjustbox 2022/10/17 v1.3a Adjusting TeX boxes (trim, clip, ...) 854 854 Package: adjustbox 2022/10/17 v1.3a Adjusting TeX boxes (trim, clip, ...)
855 855
(/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/adjcalc.sty 856 856 (/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/adjcalc.sty
Package: adjcalc 2012/05/16 v1.1 Provides advanced setlength with multiple back-ends (calc, etex, pgfmath)
) 859 859 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/trimclip.sty 860 860 (/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/trimclip.sty
Package: trimclip 2020/08/19 v1.2 Trim and clip general TeX material 861 861 Package: trimclip 2020/08/19 v1.2 Trim and clip general TeX material
862 862
(/usr/local/texlive/2023/texmf-dist/tex/latex/collectbox/collectbox.sty 863 863 (/usr/local/texlive/2023/texmf-dist/tex/latex/collectbox/collectbox.sty
Package: collectbox 2022/10/17 v0.4c Collect macro arguments as boxes 864 864 Package: collectbox 2022/10/17 v0.4c Collect macro arguments as boxes
\collectedbox=\box118 865 865 \collectedbox=\box118
) 866 866 )
\tc@llx=\dimen282 867 867 \tc@llx=\dimen282
\tc@lly=\dimen283 868 868 \tc@lly=\dimen283
\tc@urx=\dimen284 869 869 \tc@urx=\dimen284
\tc@ury=\dimen285 870 870 \tc@ury=\dimen285
Package trimclip Info: Using driver 'tc-pdftex.def'. 871 871 Package trimclip Info: Using driver 'tc-pdftex.def'.
872 872
(/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/tc-pdftex.def 873 873 (/usr/local/texlive/2023/texmf-dist/tex/latex/adjustbox/tc-pdftex.def
File: tc-pdftex.def 2019/01/04 v2.2 Clipping driver for pdftex 874 874 File: tc-pdftex.def 2019/01/04 v2.2 Clipping driver for pdftex
)) 875 875 ))
\adjbox@Width=\dimen286 876 876 \adjbox@Width=\dimen286
\adjbox@Height=\dimen287 877 877 \adjbox@Height=\dimen287
\adjbox@Depth=\dimen288 878 878 \adjbox@Depth=\dimen288
\adjbox@Totalheight=\dimen289 879 879 \adjbox@Totalheight=\dimen289
\adjbox@pwidth=\dimen290 880 880 \adjbox@pwidth=\dimen290
\adjbox@pheight=\dimen291 881 881 \adjbox@pheight=\dimen291
\adjbox@pdepth=\dimen292 882 882 \adjbox@pdepth=\dimen292
\adjbox@ptotalheight=\dimen293 883 883 \adjbox@ptotalheight=\dimen293
884 884
(/usr/local/texlive/2023/texmf-dist/tex/latex/ifoddpage/ifoddpage.sty 885 885 (/usr/local/texlive/2023/texmf-dist/tex/latex/ifoddpage/ifoddpage.sty
Package: ifoddpage 2022/10/18 v1.2 Conditionals for odd/even page detection 886 886 Package: ifoddpage 2022/10/18 v1.2 Conditionals for odd/even page detection
\c@checkoddpage=\count335 887 887 \c@checkoddpage=\count335
) 888 888 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/varwidth/varwidth.sty 889 889 (/usr/local/texlive/2023/texmf-dist/tex/latex/varwidth/varwidth.sty
Package: varwidth 2009/03/30 ver 0.92; Variable-width minipages 890 890 Package: varwidth 2009/03/30 ver 0.92; Variable-width minipages
\@vwid@box=\box119 891 891 \@vwid@box=\box119
\sift@deathcycles=\count336 892 892 \sift@deathcycles=\count336
\@vwid@loff=\dimen294 893 893 \@vwid@loff=\dimen294
\@vwid@roff=\dimen295 894 894 \@vwid@roff=\dimen295
)) 895 895 ))
(/usr/local/texlive/2023/texmf-dist/tex/latex/algorithms/algorithm.sty 896 896 (/usr/local/texlive/2023/texmf-dist/tex/latex/algorithms/algorithm.sty
Package: algorithm 2009/08/24 v0.1 Document Style `algorithm' - floating environment
899 899
(/usr/local/texlive/2023/texmf-dist/tex/latex/float/float.sty 900 900 (/usr/local/texlive/2023/texmf-dist/tex/latex/float/float.sty
Package: float 2001/11/08 v1.3d Float enhancements (AL) 901 901 Package: float 2001/11/08 v1.3d Float enhancements (AL)
\c@float@type=\count337 902 902 \c@float@type=\count337
\float@exts=\toks45 903 903 \float@exts=\toks45
\float@box=\box120 904 904 \float@box=\box120
\@float@everytoks=\toks46 905 905 \@float@everytoks=\toks46
\@floatcapt=\box121 906 906 \@floatcapt=\box121
) 907 907 )
\@float@every@algorithm=\toks47 908 908 \@float@every@algorithm=\toks47
\c@algorithm=\count338 909 909 \c@algorithm=\count338
) 910 910 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/algorithmicx/algpseudocode.sty 911 911 (/usr/local/texlive/2023/texmf-dist/tex/latex/algorithmicx/algpseudocode.sty
Package: algpseudocode 912 912 Package: algpseudocode
913 913
(/usr/local/texlive/2023/texmf-dist/tex/latex/algorithmicx/algorithmicx.sty 914 914 (/usr/local/texlive/2023/texmf-dist/tex/latex/algorithmicx/algorithmicx.sty
Package: algorithmicx 2005/04/27 v1.2 Algorithmicx 915 915 Package: algorithmicx 2005/04/27 v1.2 Algorithmicx
916 916
Document Style algorithmicx 1.2 - a greatly improved `algorithmic' style 917 917 Document Style algorithmicx 1.2 - a greatly improved `algorithmic' style
\c@ALG@line=\count339 918 918 \c@ALG@line=\count339
\c@ALG@rem=\count340 919 919 \c@ALG@rem=\count340
\c@ALG@nested=\count341 920 920 \c@ALG@nested=\count341
\ALG@tlm=\skip64 921 921 \ALG@tlm=\skip64
\ALG@thistlm=\skip65 922 922 \ALG@thistlm=\skip65
\c@ALG@Lnr=\count342 923 923 \c@ALG@Lnr=\count342
\c@ALG@blocknr=\count343 924 924 \c@ALG@blocknr=\count343
\c@ALG@storecount=\count344 925 925 \c@ALG@storecount=\count344
\c@ALG@tmpcounter=\count345 926 926 \c@ALG@tmpcounter=\count345
\ALG@tmplength=\skip66 927 927 \ALG@tmplength=\skip66
) 928 928 )
Document Style - pseudocode environments for use with the `algorithmicx' style 929 929 Document Style - pseudocode environments for use with the `algorithmicx' style
) *** define extension value defensedate **** 930 930 ) *** define extension value defensedate ****
(/usr/local/texlive/2023/texmf-dist/tex/latex/tools/layout.sty 931 931 (/usr/local/texlive/2023/texmf-dist/tex/latex/tools/layout.sty
Package: layout 2021-03-10 v1.2e Show layout parameters 932 932 Package: layout 2021-03-10 v1.2e Show layout parameters
\oneinch=\count346 933 933 \oneinch=\count346
\cnt@paperwidth=\count347 934 934 \cnt@paperwidth=\count347
\cnt@paperheight=\count348 935 935 \cnt@paperheight=\count348
\cnt@hoffset=\count349 936 936 \cnt@hoffset=\count349
\cnt@voffset=\count350 937 937 \cnt@voffset=\count350
\cnt@textheight=\count351 938 938 \cnt@textheight=\count351
\cnt@textwidth=\count352 939 939 \cnt@textwidth=\count352
\cnt@topmargin=\count353 940 940 \cnt@topmargin=\count353
\cnt@oddsidemargin=\count354 941 941 \cnt@oddsidemargin=\count354
\cnt@evensidemargin=\count355 942 942 \cnt@evensidemargin=\count355
\cnt@headheight=\count356 943 943 \cnt@headheight=\count356
\cnt@headsep=\count357 944 944 \cnt@headsep=\count357
\cnt@marginparsep=\count358 945 945 \cnt@marginparsep=\count358
\cnt@marginparwidth=\count359 946 946 \cnt@marginparwidth=\count359
\cnt@marginparpush=\count360 947 947 \cnt@marginparpush=\count360
\cnt@footskip=\count361 948 948 \cnt@footskip=\count361
\fheight=\count362 949 949 \fheight=\count362
\ref@top=\count363 950 950 \ref@top=\count363
\ref@hoffset=\count364 951 951 \ref@hoffset=\count364
\ref@voffset=\count365 952 952 \ref@voffset=\count365
\ref@head=\count366 953 953 \ref@head=\count366
\ref@body=\count367 954 954 \ref@body=\count367
\ref@foot=\count368 955 955 \ref@foot=\count368
\ref@margin=\count369 956 956 \ref@margin=\count369
\ref@marginwidth=\count370 957 957 \ref@marginwidth=\count370
\ref@marginpar=\count371 958 958 \ref@marginpar=\count371
\Interval=\count372 959 959 \Interval=\count372
\ExtraYPos=\count373 960 960 \ExtraYPos=\count373
\PositionX=\count374 961 961 \PositionX=\count374
\PositionY=\count375 962 962 \PositionY=\count375
\ArrowLength=\count376 963 963 \ArrowLength=\count376
) 964 964 )
(/usr/local/texlive/2023/texmf-dist/tex/latex/geometry/geometry.sty 965 965 (/usr/local/texlive/2023/texmf-dist/tex/latex/geometry/geometry.sty
Package: geometry 2020/01/02 v5.9 Page Geometry 966 966 Package: geometry 2020/01/02 v5.9 Page Geometry
967 967
(/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/ifvtex.sty 968 968 (/usr/local/texlive/2023/texmf-dist/tex/generic/iftex/ifvtex.sty
Package: ifvtex 2019/10/25 v1.7 ifvtex legacy package. Use iftex instead. 969 969 Package: ifvtex 2019/10/25 v1.7 ifvtex legacy package. Use iftex instead.
) 970 970 )
\Gm@cnth=\count377 971 971 \Gm@cnth=\count377
\Gm@cntv=\count378 972 972 \Gm@cntv=\count378
\c@Gm@tempcnt=\count379 973 973 \c@Gm@tempcnt=\count379
\Gm@bindingoffset=\dimen296 974 974 \Gm@bindingoffset=\dimen296
\Gm@wd@mp=\dimen297 975 975 \Gm@wd@mp=\dimen297
\Gm@odd@mp=\dimen298 976 976 \Gm@odd@mp=\dimen298
\Gm@even@mp=\dimen299 977 977 \Gm@even@mp=\dimen299
\Gm@layoutwidth=\dimen300 978 978 \Gm@layoutwidth=\dimen300
\Gm@layoutheight=\dimen301 979 979 \Gm@layoutheight=\dimen301
\Gm@layouthoffset=\dimen302 980 980 \Gm@layouthoffset=\dimen302
\Gm@layoutvoffset=\dimen303 981 981 \Gm@layoutvoffset=\dimen303
\Gm@dimlist=\toks48 982 982 \Gm@dimlist=\toks48
) (./main.aux 983 983 ) (./main.aux
(./chapters/contexte2.aux) (./chapters/EIAH.aux) (./chapters/CBR.aux) 984 984 (./chapters/contexte2.aux) (./chapters/EIAH.aux) (./chapters/CBR.aux)
(./chapters/Architecture.aux) (./chapters/ESCBR.aux) (./chapters/TS.aux 985 985 (./chapters/Architecture.aux) (./chapters/ESCBR.aux) (./chapters/TS.aux
986 986
LaTeX Warning: Label `eqBeta' multiply defined. 987 987 LaTeX Warning: Label `eqBeta' multiply defined.
988 988
989 989
LaTeX Warning: Label `fig:Amodel' multiply defined. 990 990 LaTeX Warning: Label `fig:Amodel' multiply defined.
991 991
992 992
LaTeX Warning: Label `fig:stabilityBP' multiply defined. 993 993 LaTeX Warning: Label `fig:stabilityBP' multiply defined.
994 994
) (./chapters/Conclusions.aux) (./chapters/Publications.aux)) 995 995 ) (./chapters/Conclusions.aux) (./chapters/Publications.aux))
\openout1 = `main.aux'. 996 996 \openout1 = `main.aux'.
997 997
LaTeX Font Info: Checking defaults for OML/txmi/m/it on input line 231. 998 998 LaTeX Font Info: Checking defaults for OML/txmi/m/it on input line 231.
LaTeX Font Info: Trying to load font information for OML+txmi on input line 231.
1001 1001
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omltxmi.fd 1002 1002 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omltxmi.fd
File: omltxmi.fd 2000/12/15 v3.1 1003 1003 File: omltxmi.fd 2000/12/15 v3.1
) 1004 1004 )
LaTeX Font Info: ... okay on input line 231. 1005 1005 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for OMS/txsy/m/n on input line 231. 1006 1006 LaTeX Font Info: Checking defaults for OMS/txsy/m/n on input line 231.
LaTeX Font Info: Trying to load font information for OMS+txsy on input line 231.
1009 1009
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omstxsy.fd 1010 1010 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omstxsy.fd
File: omstxsy.fd 2000/12/15 v3.1 1011 1011 File: omstxsy.fd 2000/12/15 v3.1
) 1012 1012 )
LaTeX Font Info: ... okay on input line 231. 1013 1013 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 231. 1014 1014 LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1015 1015 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 231. 1016 1016 LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1017 1017 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for TS1/cmr/m/n on input line 231. 1018 1018 LaTeX Font Info: Checking defaults for TS1/cmr/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1019 1019 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for OMX/txex/m/n on input line 231. 1020 1020 LaTeX Font Info: Checking defaults for OMX/txex/m/n on input line 231.
LaTeX Font Info: Trying to load font information for OMX+txex on input line 231.
1023 1023
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omxtxex.fd 1024 1024 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/omxtxex.fd
File: omxtxex.fd 2000/12/15 v3.1 1025 1025 File: omxtxex.fd 2000/12/15 v3.1
) 1026 1026 )
LaTeX Font Info: ... okay on input line 231. 1027 1027 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for U/txexa/m/n on input line 231. 1028 1028 LaTeX Font Info: Checking defaults for U/txexa/m/n on input line 231.
LaTeX Font Info: Trying to load font information for U+txexa on input line 231.
1031 1031
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxexa.fd 1032 1032 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxexa.fd
File: utxexa.fd 2000/12/15 v3.1 1033 1033 File: utxexa.fd 2000/12/15 v3.1
) 1034 1034 )
LaTeX Font Info: ... okay on input line 231. 1035 1035 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 231. 1036 1036 LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1037 1037 LaTeX Font Info: ... okay on input line 231.
LaTeX Font Info: Checking defaults for PU/pdf/m/n on input line 231. 1038 1038 LaTeX Font Info: Checking defaults for PU/pdf/m/n on input line 231.
LaTeX Font Info: ... okay on input line 231. 1039 1039 LaTeX Font Info: ... okay on input line 231.
1040 1040
(/usr/local/texlive/2023/texmf-dist/tex/context/base/mkii/supp-pdf.mkii 1041 1041 (/usr/local/texlive/2023/texmf-dist/tex/context/base/mkii/supp-pdf.mkii
[Loading MPS to PDF converter (version 2006.09.02).] 1042 1042 [Loading MPS to PDF converter (version 2006.09.02).]
\scratchcounter=\count380 1043 1043 \scratchcounter=\count380
\scratchdimen=\dimen304 1044 1044 \scratchdimen=\dimen304
\scratchbox=\box122 1045 1045 \scratchbox=\box122
\nofMPsegments=\count381 1046 1046 \nofMPsegments=\count381
\nofMParguments=\count382 1047 1047 \nofMParguments=\count382
\everyMPshowfont=\toks49 1048 1048 \everyMPshowfont=\toks49
\MPscratchCnt=\count383 1049 1049 \MPscratchCnt=\count383
\MPscratchDim=\dimen305 1050 1050 \MPscratchDim=\dimen305
\MPnumerator=\count384 1051 1051 \MPnumerator=\count384
\makeMPintoPDFobject=\count385 1052 1052 \makeMPintoPDFobject=\count385
\everyMPtoPDFconversion=\toks50 1053 1053 \everyMPtoPDFconversion=\toks50
) (/usr/local/texlive/2023/texmf-dist/tex/latex/epstopdf-pkg/epstopdf-base.sty 1054 1054 ) (/usr/local/texlive/2023/texmf-dist/tex/latex/epstopdf-pkg/epstopdf-base.sty
Package: epstopdf-base 2020-01-24 v2.11 Base part for package epstopdf 1055 1055 Package: epstopdf-base 2020-01-24 v2.11 Base part for package epstopdf
Package epstopdf-base Info: Redefining graphics rule for `.eps' on input line 485.
1058 1058
(/usr/local/texlive/2023/texmf-dist/tex/latex/latexconfig/epstopdf-sys.cfg 1059 1059 (/usr/local/texlive/2023/texmf-dist/tex/latex/latexconfig/epstopdf-sys.cfg
File: epstopdf-sys.cfg 2010/07/13 v1.3 Configuration of (r)epstopdf for TeX Live
)) 1062 1062 ))
LaTeX Info: Redefining \degres on input line 231. 1063 1063 LaTeX Info: Redefining \degres on input line 231.
LaTeX Info: Redefining \up on input line 231. 1064 1064 LaTeX Info: Redefining \up on input line 231.
Package caption Info: Begin \AtBeginDocument code. 1065 1065 Package caption Info: Begin \AtBeginDocument code.
Package caption Info: float package is loaded. 1066 1066 Package caption Info: float package is loaded.
Package caption Info: hyperref package is loaded. 1067 1067 Package caption Info: hyperref package is loaded.
Package caption Info: picinpar package is loaded. 1068 1068 Package caption Info: picinpar package is loaded.
Package caption Info: End \AtBeginDocument code. 1069 1069 Package caption Info: End \AtBeginDocument code.
1070 1070
*** Overriding the 'enumerate' environment. Pass option 'standardlists' for avoiding this override.
*** Overriding the 'description' environment. Pass option 'standardlists' for avoiding this override. ************ USE CUSTOM FRONT COVER
Package hyperref Info: Link coloring OFF on input line 231. 1075 1075 Package hyperref Info: Link coloring OFF on input line 231.
(./main.out) 1076 1076 (./main.out)
(./main.out) 1077 1077 (./main.out)
\@outlinefile=\write3 1078 1078 \@outlinefile=\write3
\openout3 = `main.out'. 1079 1079 \openout3 = `main.out'.
1080 1080
1081 1081
*geometry* driver: auto-detecting 1082 1082 *geometry* driver: auto-detecting
*geometry* detected driver: pdftex 1083 1083 *geometry* detected driver: pdftex
*geometry* verbose mode - [ preamble ] result: 1084 1084 *geometry* verbose mode - [ preamble ] result:
* pass: disregarded the geometry package! 1085 1085 * pass: disregarded the geometry package!
* \paperwidth=598.14806pt 1086 1086 * \paperwidth=598.14806pt
* \paperheight=845.90042pt 1087 1087 * \paperheight=845.90042pt
* \textwidth=427.43153pt 1088 1088 * \textwidth=427.43153pt
* \textheight=671.71976pt 1089 1089 * \textheight=671.71976pt
* \oddsidemargin=99.58464pt 1090 1090 * \oddsidemargin=99.58464pt
* \evensidemargin=71.13188pt 1091 1091 * \evensidemargin=71.13188pt
* \topmargin=56.9055pt 1092 1092 * \topmargin=56.9055pt
* \headheight=12.0pt 1093 1093 * \headheight=12.0pt
* \headsep=31.29802pt 1094 1094 * \headsep=31.29802pt
* \topskip=11.0pt 1095 1095 * \topskip=11.0pt
* \footskip=31.29802pt 1096 1096 * \footskip=31.29802pt
* \marginparwidth=54.2025pt 1097 1097 * \marginparwidth=54.2025pt
* \marginparsep=7.0pt 1098 1098 * \marginparsep=7.0pt
* \columnsep=10.0pt 1099 1099 * \columnsep=10.0pt
* \skip\footins=10.0pt plus 4.0pt minus 2.0pt 1100 1100 * \skip\footins=10.0pt plus 4.0pt minus 2.0pt
* \hoffset=-72.26999pt 1101 1101 * \hoffset=-72.26999pt
* \voffset=-72.26999pt 1102 1102 * \voffset=-72.26999pt
* \mag=1000 1103 1103 * \mag=1000
* \@twocolumnfalse 1104 1104 * \@twocolumnfalse
* \@twosidetrue 1105 1105 * \@twosidetrue
* \@mparswitchtrue 1106 1106 * \@mparswitchtrue
* \@reversemarginfalse 1107 1107 * \@reversemarginfalse
* (1in=72.27pt=25.4mm, 1cm=28.453pt) 1108 1108 * (1in=72.27pt=25.4mm, 1cm=28.453pt)
1109 1109
*geometry* verbose mode - [ newgeometry ] result: 1110 1110 *geometry* verbose mode - [ newgeometry ] result:
* driver: pdftex 1111 1111 * driver: pdftex
* paper: a4paper 1112 1112 * paper: a4paper
* layout: <same size as paper> 1113 1113 * layout: <same size as paper>
* layoutoffset:(h,v)=(0.0pt,0.0pt) 1114 1114 * layoutoffset:(h,v)=(0.0pt,0.0pt)
* modes: twoside 1115 1115 * modes: twoside
* h-part:(L,W,R)=(170.71652pt, 355.65306pt, 71.77847pt) 1116 1116 * h-part:(L,W,R)=(170.71652pt, 355.65306pt, 71.77847pt)
* v-part:(T,H,B)=(101.50906pt, 741.54591pt, 2.84544pt) 1117 1117 * v-part:(T,H,B)=(101.50906pt, 741.54591pt, 2.84544pt)
* \paperwidth=598.14806pt 1118 1118 * \paperwidth=598.14806pt
* \paperheight=845.90042pt 1119 1119 * \paperheight=845.90042pt
* \textwidth=355.65306pt 1120 1120 * \textwidth=355.65306pt
* \textheight=741.54591pt 1121 1121 * \textheight=741.54591pt
* \oddsidemargin=98.44653pt 1122 1122 * \oddsidemargin=98.44653pt
* \evensidemargin=-0.49152pt 1123 1123 * \evensidemargin=-0.49152pt
* \topmargin=-14.05894pt 1124 1124 * \topmargin=-14.05894pt
* \headheight=12.0pt 1125 1125 * \headheight=12.0pt
* \headsep=31.29802pt 1126 1126 * \headsep=31.29802pt
* \topskip=11.0pt 1127 1127 * \topskip=11.0pt
* \footskip=31.29802pt 1128 1128 * \footskip=31.29802pt
* \marginparwidth=54.2025pt 1129 1129 * \marginparwidth=54.2025pt
* \marginparsep=7.0pt 1130 1130 * \marginparsep=7.0pt
* \columnsep=10.0pt 1131 1131 * \columnsep=10.0pt
* \skip\footins=10.0pt plus 4.0pt minus 2.0pt 1132 1132 * \skip\footins=10.0pt plus 4.0pt minus 2.0pt
* \hoffset=-72.26999pt 1133 1133 * \hoffset=-72.26999pt
* \voffset=-72.26999pt 1134 1134 * \voffset=-72.26999pt
* \mag=1000 1135 1135 * \mag=1000
* \@twocolumnfalse 1136 1136 * \@twocolumnfalse
* \@twosidetrue 1137 1137 * \@twosidetrue
* \@mparswitchtrue 1138 1138 * \@mparswitchtrue
* \@reversemarginfalse 1139 1139 * \@reversemarginfalse
* (1in=72.27pt=25.4mm, 1cm=28.453pt) 1140 1140 * (1in=72.27pt=25.4mm, 1cm=28.453pt)
1141 1141
<images_logos/image1_logoUBFC_grand.png, id=385, 610.4406pt x 217.0509pt> 1142 1142 <images_logos/image1_logoUBFC_grand.png, id=385, 610.4406pt x 217.0509pt>
File: images_logos/image1_logoUBFC_grand.png Graphic file (type png) 1143 1143 File: images_logos/image1_logoUBFC_grand.png Graphic file (type png)
<use images_logos/image1_logoUBFC_grand.png> 1144 1144 <use images_logos/image1_logoUBFC_grand.png>
Package pdftex.def Info: images_logos/image1_logoUBFC_grand.png used on input line 237.
(pdftex.def) Requested size: 142.25905pt x 50.57973pt. 1147 1147 (pdftex.def) Requested size: 142.25905pt x 50.57973pt.
<images_logos/logo_UFC_2018_transparence.png, id=387, 104.5506pt x 34.6896pt> 1148 1148 <images_logos/logo_UFC_2018_transparence.png, id=387, 104.5506pt x 34.6896pt>
File: images_logos/logo_UFC_2018_transparence.png Graphic file (type png) 1149 1149 File: images_logos/logo_UFC_2018_transparence.png Graphic file (type png)
<use images_logos/logo_UFC_2018_transparence.png> 1150 1150 <use images_logos/logo_UFC_2018_transparence.png>
Package pdftex.def Info: images_logos/logo_UFC_2018_transparence.png used on input line 237.
(pdftex.def) Requested size: 142.25905pt x 47.20264pt. 1153 1153 (pdftex.def) Requested size: 142.25905pt x 47.20264pt.
LaTeX Font Info: Trying to load font information for OT1+txr on input line 248.
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/ot1txr.fd 1156 1156 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/ot1txr.fd
File: ot1txr.fd 2000/12/15 v3.1 1157 1157 File: ot1txr.fd 2000/12/15 v3.1
) 1158 1158 )
LaTeX Font Info: Trying to load font information for U+txmia on input line 248.
1161 1161
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxmia.fd 1162 1162 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxmia.fd
File: utxmia.fd 2000/12/15 v3.1 1163 1163 File: utxmia.fd 2000/12/15 v3.1
) 1164 1164 )
LaTeX Font Info: Trying to load font information for U+txsya on input line 248.
1167 1167
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsya.fd 1168 1168 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsya.fd
File: utxsya.fd 2000/12/15 v3.1 1169 1169 File: utxsya.fd 2000/12/15 v3.1
) 1170 1170 )
LaTeX Font Info: Trying to load font information for U+txsyb on input line 248.
1173 1173
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsyb.fd 1174 1174 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsyb.fd
File: utxsyb.fd 2000/12/15 v3.1 1175 1175 File: utxsyb.fd 2000/12/15 v3.1
) 1176 1176 )
LaTeX Font Info: Trying to load font information for U+txsyc on input line 248.
1179 1179
(/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsyc.fd 1180 1180 (/usr/local/texlive/2023/texmf-dist/tex/latex/txfonts/utxsyc.fd
File: utxsyc.fd 2000/12/15 v3.1 1181 1181 File: utxsyc.fd 2000/12/15 v3.1
) [1 1182 1182 ) [1
1183 1183
1184 1184
1185 1185
1186 1186
{/usr/local/texlive/2023/texmf-var/fonts/map/pdftex/updmap/pdftex.map}{/usr/local/texlive/2023/texmf-dist/fonts/enc/dvips/base/8r.enc} <./images_logos/image1_logoUBFC_grand.png> <./images_logos/logo_UFC_2018_transparence.png>] [2
1190 1190
1191 1191
] [3] [4] 1192 1192 ] [3] [4]
(./main.toc 1193 1193 (./main.toc
LaTeX Font Info: Font shape `T1/phv/m/it' in size <10.95> not available 1194 1194 LaTeX Font Info: Font shape `T1/phv/m/it' in size <10.95> not available
(Font) Font shape `T1/phv/m/sl' tried instead on input line 23. 1195 1195 (Font) Font shape `T1/phv/m/sl' tried instead on input line 23.
[5 1196 1196 [5
1197 1197
] [6] 1198 1198 ] [6]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1199 1199 Underfull \vbox (badness 10000) has occurred while \output is active []
1200 1200
[7] 1201 1201 [7]
Overfull \hbox (1.29184pt too wide) detected at line 89 1202 1202 Overfull \hbox (1.29184pt too wide) detected at line 89
[][]\T1/phv/m/n/10.95 100[] 1203 1203 [][]\T1/phv/m/n/10.95 100[]
[] 1204 1204 []
1205 1205
1206 1206
Overfull \hbox (1.29184pt too wide) detected at line 90 1207 1207 Overfull \hbox (1.29184pt too wide) detected at line 90
[][]\T1/phv/m/n/10.95 100[] 1208 1208 [][]\T1/phv/m/n/10.95 100[]
[] 1209 1209 []
1210 1210
1211 1211
Overfull \hbox (1.29184pt too wide) detected at line 92 1212 1212 Overfull \hbox (1.29184pt too wide) detected at line 92
[][]\T1/phv/m/n/10.95 103[] 1213 1213 [][]\T1/phv/m/n/10.95 103[]
[] 1214 1214 []
1215 1215
1216 1216
Overfull \hbox (1.29184pt too wide) detected at line 93 1217 1217 Overfull \hbox (1.29184pt too wide) detected at line 93
[][]\T1/phv/m/n/10.95 104[] 1218 1218 [][]\T1/phv/m/n/10.95 104[]
[] 1219 1219 []
1220 1220
1221 1221
Overfull \hbox (1.29184pt too wide) detected at line 95 1222 1222 Overfull \hbox (1.29184pt too wide) detected at line 95
[][]\T1/phv/m/n/10.95 105[] 1223 1223 [][]\T1/phv/m/n/10.95 105[]
[] 1224 1224 []
1225 1225
1226 1226
Overfull \hbox (1.29184pt too wide) detected at line 96 1227 1227 Overfull \hbox (1.29184pt too wide) detected at line 96
[][]\T1/phv/m/n/10.95 106[] 1228 1228 [][]\T1/phv/m/n/10.95 106[]
[] 1229 1229 []
1230 1230
) 1231 1231 )
\tf@toc=\write4 1232 1232 \tf@toc=\write4
\openout4 = `main.toc'. 1233 1233 \openout4 = `main.toc'.
1234 1234
[8] [1 1235 1235 [8] [1
1236 1236
1237 1237
] [2] 1238 1238 ] [2]
Chapitre 1. 1239 1239 Chapitre 1.
Package lettrine.sty Info: Targeted height = 19.96736pt 1240 1240 Package lettrine.sty Info: Targeted height = 19.96736pt
(lettrine.sty) (for loversize=0, accent excluded), 1241 1241 (lettrine.sty) (for loversize=0, accent excluded),
(lettrine.sty) Lettrine height = 20.612pt (\uppercase {C}); 1242 1242 (lettrine.sty) Lettrine height = 20.612pt (\uppercase {C});
(lettrine.sty) reported on input line 340. 1243 1243 (lettrine.sty) reported on input line 340.
1244 1244
Overfull \hbox (6.79999pt too wide) in paragraph at lines 340--340 1245 1245 Overfull \hbox (6.79999pt too wide) in paragraph at lines 340--340
[][][][] 1246 1246 [][][][]
[] 1247 1247 []
1248 1248
1249 1249
Underfull \vbox (badness 10000) has occurred while \output is active [] 1250 1250 Underfull \vbox (badness 10000) has occurred while \output is active []
1251 1251
[3 1252 1252 [3
1253 1253
] 1254 1254 ]
[4] [5] 1255 1255 [4] [5]
\openout2 = `./chapters/contexte2.aux'. 1256 1256 \openout2 = `./chapters/contexte2.aux'.
1257 1257
(./chapters/contexte2.tex [6 1258 1258 (./chapters/contexte2.tex [6
1259 1259
1260 1260
] 1261 1261 ]
Chapitre 2. 1262 1262 Chapitre 2.
<./Figures/TLearning.png, id=558, 603.25375pt x 331.2375pt> 1263 1263 <./Figures/TLearning.png, id=558, 603.25375pt x 331.2375pt>
File: ./Figures/TLearning.png Graphic file (type png) 1264 1264 File: ./Figures/TLearning.png Graphic file (type png)
<use ./Figures/TLearning.png> 1265 1265 <use ./Figures/TLearning.png>
Package pdftex.def Info: ./Figures/TLearning.png used on input line 15. 1266 1266 Package pdftex.def Info: ./Figures/TLearning.png used on input line 15.
(pdftex.def) Requested size: 427.43153pt x 234.69505pt. 1267 1267 (pdftex.def) Requested size: 427.43153pt x 234.69505pt.
[7] 1268 1268 [7]
<./Figures/EIAH.png, id=567, 643.40375pt x 362.35374pt> 1269 1269 <./Figures/EIAH.png, id=567, 643.40375pt x 362.35374pt>
File: ./Figures/EIAH.png Graphic file (type png) 1270 1270 File: ./Figures/EIAH.png Graphic file (type png)
<use ./Figures/EIAH.png> 1271 1271 <use ./Figures/EIAH.png>
Package pdftex.def Info: ./Figures/EIAH.png used on input line 32. 1272 1272 Package pdftex.def Info: ./Figures/EIAH.png used on input line 32.
(pdftex.def) Requested size: 427.43153pt x 240.73pt. 1273 1273 (pdftex.def) Requested size: 427.43153pt x 240.73pt.
1274 1274
1275 1275
LaTeX Warning: `!h' float specifier changed to `!ht'. 1276 1276 LaTeX Warning: `!h' float specifier changed to `!ht'.
1277 1277
[8 <./Figures/TLearning.png>] [9 <./Figures/EIAH.png>] [10] 1278 1278 [8 <./Figures/TLearning.png>] [9 <./Figures/EIAH.png>] [10]
<./Figures/cycle.png, id=594, 668.4975pt x 665.48625pt> 1279 1279 <./Figures/cycle.png, id=594, 668.4975pt x 665.48625pt>
File: ./Figures/cycle.png Graphic file (type png) 1280 1280 File: ./Figures/cycle.png Graphic file (type png)
<use ./Figures/cycle.png> 1281 1281 <use ./Figures/cycle.png>
Package pdftex.def Info: ./Figures/cycle.png used on input line 83. 1282 1282 Package pdftex.def Info: ./Figures/cycle.png used on input line 83.
(pdftex.def) Requested size: 427.43153pt x 425.51372pt. 1283 1283 (pdftex.def) Requested size: 427.43153pt x 425.51372pt.
[11 <./Figures/cycle.png>] 1284 1284 [11 <./Figures/cycle.png>]
<./Figures/Reuse.png, id=617, 383.4325pt x 182.6825pt> 1285 1285 <./Figures/Reuse.png, id=617, 383.4325pt x 182.6825pt>
File: ./Figures/Reuse.png Graphic file (type png) 1286 1286 File: ./Figures/Reuse.png Graphic file (type png)
<use ./Figures/Reuse.png> 1287 1287 <use ./Figures/Reuse.png>
Package pdftex.def Info: ./Figures/Reuse.png used on input line 112. 1288 1288 Package pdftex.def Info: ./Figures/Reuse.png used on input line 112.
(pdftex.def) Requested size: 427.43153pt x 203.65802pt. 1289 1289 (pdftex.def) Requested size: 427.43153pt x 203.65802pt.
1290 1290
Underfull \hbox (badness 10000) in paragraph at lines 112--112 1291 1291 Underfull \hbox (badness 10000) in paragraph at lines 112--112
[]\T1/phv/m/sc/10.95 Figure 2.4 \T1/phv/m/n/10.95 ^^U |Prin-cipe de réuti-li-sa-tion dans le RàPC (Tra-duit de
[] 1294 1294 []
1295 1295
[12] [13 <./Figures/Reuse.png>] 1296 1296 [12] [13 <./Figures/Reuse.png>]
<./Figures/CycleCBR.png, id=637, 147.1899pt x 83.8332pt> 1297 1297 <./Figures/CycleCBR.png, id=637, 147.1899pt x 83.8332pt>
File: ./Figures/CycleCBR.png Graphic file (type png) 1298 1298 File: ./Figures/CycleCBR.png Graphic file (type png)
<use ./Figures/CycleCBR.png> 1299 1299 <use ./Figures/CycleCBR.png>
Package pdftex.def Info: ./Figures/CycleCBR.png used on input line 156. 1300 1300 Package pdftex.def Info: ./Figures/CycleCBR.png used on input line 156.
(pdftex.def) Requested size: 427.43153pt x 243.45026pt. 1301 1301 (pdftex.def) Requested size: 427.43153pt x 243.45026pt.
[14] [15 <./Figures/CycleCBR.png>] [16] 1302 1302 [14] [15 <./Figures/CycleCBR.png>] [16]
1303 1303
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1306 1306
LaTeX Font Info: Trying to load font information for TS1+phv on input line 265.
(/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/ts1phv.fd 1309 1309 (/usr/local/texlive/2023/texmf-dist/tex/latex/psnfss/ts1phv.fd
File: ts1phv.fd 2020/03/25 scalable font definitions for TS1/phv. 1310 1310 File: ts1phv.fd 2020/03/25 scalable font definitions for TS1/phv.
) 1311 1311 )
1312 1312
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1315 1315
1316 1316
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1319 1319
1320 1320
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1323 1323
1324 1324
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1327 1327
1328 1328
LaTeX Warning: Command \textperiodcentered invalid in math mode on input line 265.
1331 1331
Missing character: There is no · in font txr! 1332 1332 Missing character: There is no · in font txr!
Missing character: There is no · in font txr! 1333 1333 Missing character: There is no · in font txr!
Missing character: There is no · in font txr! 1334 1334 Missing character: There is no · in font txr!
1335 1335
LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined 1336 1336 LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined
(Font) using `T1/phv/m/it' instead on input line 284. 1337 1337 (Font) using `T1/phv/m/it' instead on input line 284.
1338 1338
[17] [18] 1339 1339 [17] [18]
1340 1340
LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined 1341 1341 LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined
(Font) using `T1/phv/m/it' instead on input line 333. 1342 1342 (Font) using `T1/phv/m/it' instead on input line 333.
1343 1343
1344 1344
LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined 1345 1345 LaTeX Font Warning: Font shape `T1/phv/m/scit' undefined
(Font) using `T1/phv/m/it' instead on input line 337. 1346 1346 (Font) using `T1/phv/m/it' instead on input line 337.
1347 1347
<./Figures/beta-distribution.png, id=714, 621.11293pt x 480.07928pt> 1348 1348 <./Figures/beta-distribution.png, id=714, 621.11293pt x 480.07928pt>
File: ./Figures/beta-distribution.png Graphic file (type png) 1349 1349 File: ./Figures/beta-distribution.png Graphic file (type png)
<use ./Figures/beta-distribution.png> 1350 1350 <use ./Figures/beta-distribution.png>
Package pdftex.def Info: ./Figures/beta-distribution.png used on input line 34 1351 1351 Package pdftex.def Info: ./Figures/beta-distribution.png used on input line 34
5. 1352 1352 5.
(pdftex.def) Requested size: 427.43153pt x 330.38333pt. 1353 1353 (pdftex.def) Requested size: 427.43153pt x 330.38333pt.
[19]) [20 <./Figures/beta-distribution.png>] [21 1354 1354 [19]) [20 <./Figures/beta-distribution.png>] [21
1355 1355
1356 1356
1357 1357
] [22] 1358 1358 ] [22]
\openout2 = `./chapters/EIAH.aux'. 1359 1359 \openout2 = `./chapters/EIAH.aux'.
1360 1360
(./chapters/EIAH.tex 1361 1361 (./chapters/EIAH.tex
Chapitre 3. 1362 1362 Chapitre 3.
[23 1363 1363 [23
1364 1364
1365 1365
] 1366 1366 ]
Underfull \hbox (badness 10000) in paragraph at lines 24--25 1367 1367 Underfull \hbox (badness 10000) in paragraph at lines 24--25
[]\T1/phv/m/n/10.95 Les tech-niques d'IA peuvent aussi ai-der à prendre des dé- 1368 1368 []\T1/phv/m/n/10.95 Les tech-niques d'IA peuvent aussi ai-der à prendre des dé-
ci-sions stra-té- 1369 1369 ci-sions stra-té-
[] 1370 1370 []
1371 1371
1372 1372
Underfull \hbox (badness 1874) in paragraph at lines 24--25 1373 1373 Underfull \hbox (badness 1874) in paragraph at lines 24--25
\T1/phv/m/n/10.95 giques vi-sant des ob-jec-tifs à longue échéance comme le mon 1374 1374 \T1/phv/m/n/10.95 giques vi-sant des ob-jec-tifs à longue échéance comme le mon
tre le tra-vail de 1375 1375 tre le tra-vail de
[] 1376 1376 []
1377 1377
<./Figures/architecture.png, id=752, 776.9025pt x 454.69875pt> 1378 1378 <./Figures/architecture.png, id=752, 776.9025pt x 454.69875pt>
File: ./Figures/architecture.png Graphic file (type png) 1379 1379 File: ./Figures/architecture.png Graphic file (type png)
<use ./Figures/architecture.png> 1380 1380 <use ./Figures/architecture.png>
Package pdftex.def Info: ./Figures/architecture.png used on input line 38. 1381 1381 Package pdftex.def Info: ./Figures/architecture.png used on input line 38.
(pdftex.def) Requested size: 427.43153pt x 250.16833pt. 1382 1382 (pdftex.def) Requested size: 427.43153pt x 250.16833pt.
1383 1383
LaTeX Warning: Reference `sectBanditManchot' on page 24 undefined on input line 1384 1384 LaTeX Warning: Reference `sectBanditManchot' on page 24 undefined on input line
43. 1385 1385 43.
1386 1386
[24] 1387 1387 [24]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1388 1388 Underfull \vbox (badness 10000) has occurred while \output is active []
1389 1389
[25 <./Figures/architecture.png>] 1390 1390 [25 <./Figures/architecture.png>]
<./Figures/ELearningLevels.png, id=781, 602.25pt x 612.78937pt> 1391 1391 <./Figures/ELearningLevels.png, id=781, 602.25pt x 612.78937pt>
File: ./Figures/ELearningLevels.png Graphic file (type png) 1392 1392 File: ./Figures/ELearningLevels.png Graphic file (type png)
<use ./Figures/ELearningLevels.png> 1393 1393 <use ./Figures/ELearningLevels.png>
Package pdftex.def Info: ./Figures/ELearningLevels.png used on input line 62. 1394 1394 Package pdftex.def Info: ./Figures/ELearningLevels.png used on input line 62.
(pdftex.def) Requested size: 427.43153pt x 434.92455pt. 1395 1395 (pdftex.def) Requested size: 427.43153pt x 434.92455pt.
1396 1396
Underfull \hbox (badness 3690) in paragraph at lines 62--62 1397 1397 Underfull \hbox (badness 3690) in paragraph at lines 62--62
[]\T1/phv/m/sc/10.95 Figure 3.2 \T1/phv/m/n/10.95 ^^U |Tra-duc-tion des ni-veau 1398 1398 []\T1/phv/m/sc/10.95 Figure 3.2 \T1/phv/m/n/10.95 ^^U |Tra-duc-tion des ni-veau
x du sys-tème de re-com-man-da-tion dans 1399 1399 x du sys-tème de re-com-man-da-tion dans
[] 1400 1400 []
1401 1401
1402 1402
Underfull \vbox (badness 10000) has occurred while \output is active [] 1403 1403 Underfull \vbox (badness 10000) has occurred while \output is active []
1404 1404
[26] 1405 1405 [26]
Overfull \hbox (2.56369pt too wide) in paragraph at lines 82--82 1406 1406 Overfull \hbox (2.56369pt too wide) in paragraph at lines 82--82
[]|\T1/phv/m/n/9 [[]]| 1407 1407 []|\T1/phv/m/n/9 [[]]|
[] 1408 1408 []
1409 1409
1410 1410
Overfull \hbox (0.5975pt too wide) in paragraph at lines 77--93 1411 1411 Overfull \hbox (0.5975pt too wide) in paragraph at lines 77--93
[][] 1412 1412 [][]
[] 1413 1413 []
1414 1414
) [27 <./Figures/ELearningLevels.png>] [28] 1415 1415 ) [27 <./Figures/ELearningLevels.png>] [28]
\openout2 = `./chapters/CBR.aux'. 1416 1416 \openout2 = `./chapters/CBR.aux'.
1417 1417
(./chapters/CBR.tex 1418 1418 (./chapters/CBR.tex
Chapitre 4. 1419 1419 Chapitre 4.
[29 1420 1420 [29
1421 1421
1422 1422
1423 1423
1424 1424
] [30] 1425 1425 ] [30]
Underfull \hbox (badness 1048) in paragraph at lines 25--26 1426 1426 Underfull \hbox (badness 1048) in paragraph at lines 25--26
[]\T1/phv/m/n/10.95 [[]] uti-lisent éga-le-ment le RàPC pour sé-lec-tion-ner la 1427 1427 []\T1/phv/m/n/10.95 [[]] uti-lisent éga-le-ment le RàPC pour sé-lec-tion-ner la
1428 1428
[] 1429 1429 []
1430 1430
<./Figures/ModCBR2.png, id=845, 1145.27875pt x 545.03625pt> 1431 1431 <./Figures/ModCBR2.png, id=845, 1145.27875pt x 545.03625pt>
File: ./Figures/ModCBR2.png Graphic file (type png) 1432 1432 File: ./Figures/ModCBR2.png Graphic file (type png)
<use ./Figures/ModCBR2.png> 1433 1433 <use ./Figures/ModCBR2.png>
Package pdftex.def Info: ./Figures/ModCBR2.png used on input line 39. 1434 1434 Package pdftex.def Info: ./Figures/ModCBR2.png used on input line 39.
(pdftex.def) Requested size: 427.43153pt x 203.41505pt. 1435 1435 (pdftex.def) Requested size: 427.43153pt x 203.41505pt.
1436 1436
Underfull \vbox (badness 1163) has occurred while \output is active [] 1437 1437 Underfull \vbox (badness 1163) has occurred while \output is active []
1438 1438
1439 1439
Overfull \hbox (24.44536pt too wide) has occurred while \output is active 1440 1440 Overfull \hbox (24.44536pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 4.3. TRAVAUX RÉCENTS SUR LA REPRÉSENTATION DES CAS ET LE CY 1441 1441 \T1/phv/m/sl/10.95 4.3. TRAVAUX RÉCENTS SUR LA REPRÉSENTATION DES CAS ET LE CY
CLE DU RÀPC \T1/phv/m/n/10.95 31 1442 1442 CLE DU RÀPC \T1/phv/m/n/10.95 31
[] 1443 1443 []
1444 1444
[31] 1445 1445 [31]
<./Figures/ModCBR1.png, id=859, 942.52126pt x 624.83438pt> 1446 1446 <./Figures/ModCBR1.png, id=859, 942.52126pt x 624.83438pt>
File: ./Figures/ModCBR1.png Graphic file (type png) 1447 1447 File: ./Figures/ModCBR1.png Graphic file (type png)
<use ./Figures/ModCBR1.png> 1448 1448 <use ./Figures/ModCBR1.png>
Package pdftex.def Info: ./Figures/ModCBR1.png used on input line 45. 1449 1449 Package pdftex.def Info: ./Figures/ModCBR1.png used on input line 45.
(pdftex.def) Requested size: 427.43153pt x 283.36574pt. 1450 1450 (pdftex.def) Requested size: 427.43153pt x 283.36574pt.
[32 <./Figures/ModCBR2.png>] [33 <./Figures/ModCBR1.png>] [34] 1451 1451 [32 <./Figures/ModCBR2.png>] [33 <./Figures/ModCBR1.png>] [34]
<./Figures/taxonomieEIAH.png, id=900, 984.67876pt x 614.295pt> 1452 1452 <./Figures/taxonomieEIAH.png, id=900, 984.67876pt x 614.295pt>
File: ./Figures/taxonomieEIAH.png Graphic file (type png) 1453 1453 File: ./Figures/taxonomieEIAH.png Graphic file (type png)
<use ./Figures/taxonomieEIAH.png> 1454 1454 <use ./Figures/taxonomieEIAH.png>
Package pdftex.def Info: ./Figures/taxonomieEIAH.png used on input line 81. 1455 1455 Package pdftex.def Info: ./Figures/taxonomieEIAH.png used on input line 81.
(pdftex.def) Requested size: 427.43153pt x 266.65376pt. 1456 1456 (pdftex.def) Requested size: 427.43153pt x 266.65376pt.
1457 1457
Underfull \hbox (badness 1895) in paragraph at lines 90--90 1458 1458 Underfull \hbox (badness 1895) in paragraph at lines 90--90
[][]\T1/phv/m/sc/14.4 Récapitulatif des li-mites des tra-vaux pré-sen-tés 1459 1459 [][]\T1/phv/m/sc/14.4 Récapitulatif des li-mites des tra-vaux pré-sen-tés
[] 1460 1460 []
1461 1461
[35] 1462 1462 [35]
Overfull \hbox (2.19226pt too wide) in paragraph at lines 108--108 1463 1463 Overfull \hbox (2.19226pt too wide) in paragraph at lines 108--108
[]|\T1/phv/m/n/9 [[]]| 1464 1464 []|\T1/phv/m/n/9 [[]]|
[] 1465 1465 []
1466 1466
1467 1467
Overfull \hbox (8.65419pt too wide) in paragraph at lines 114--114 1468 1468 Overfull \hbox (8.65419pt too wide) in paragraph at lines 114--114
[]|\T1/phv/m/n/9 [[]]| 1469 1469 []|\T1/phv/m/n/9 [[]]|
[] 1470 1470 []
1471 1471
1472 1472
Overfull \hbox (1.23834pt too wide) in paragraph at lines 134--134 1473 1473 Overfull \hbox (1.23834pt too wide) in paragraph at lines 134--134
[]|\T1/phv/m/n/9 [[]]| 1474 1474 []|\T1/phv/m/n/9 [[]]|
[] 1475 1475 []
1476 1476
1477 1477
Overfull \hbox (7.38495pt too wide) in paragraph at lines 142--142 1478 1478 Overfull \hbox (7.38495pt too wide) in paragraph at lines 142--142
[]|\T1/phv/m/n/9 [[]]| 1479 1479 []|\T1/phv/m/n/9 [[]]|
[] 1480 1480 []
1481 1481
) [36 <./Figures/taxonomieEIAH.png>] 1482 1482 ) [36 <./Figures/taxonomieEIAH.png>]
Overfull \hbox (14.11055pt too wide) has occurred while \output is active 1483 1483 Overfull \hbox (14.11055pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 4.7. RÉCAPITULATIF DES LIMITES DES TRAVAUX PRÉSENTÉS DANS C 1484 1484 \T1/phv/m/sl/10.95 4.7. RÉCAPITULATIF DES LIMITES DES TRAVAUX PRÉSENTÉS DANS C
E CHAPITRE \T1/phv/m/n/10.95 37 1485 1485 E CHAPITRE \T1/phv/m/n/10.95 37
[] 1486 1486 []
1487 1487
[37] [38 1488 1488 [37] [38
1489 1489
1490 1490
1491 1491
] [39] [40] 1492 1492 ] [39] [40]
\openout2 = `./chapters/Architecture.aux'. 1493 1493 \openout2 = `./chapters/Architecture.aux'.
1494 1494
(./chapters/Architecture.tex 1495 1495 (./chapters/Architecture.tex
Chapitre 5. 1496 1496 Chapitre 5.
1497 1497
Underfull \vbox (badness 10000) has occurred while \output is active [] 1498 1498 Underfull \vbox (badness 10000) has occurred while \output is active []
1499 1499
[41 1500 1500 [41
1501 1501
1502 1502
] 1503 1503 ]
<./Figures/AIVT.png, id=976, 1116.17pt x 512.91624pt> 1504 1504 <./Figures/AIVT.png, id=976, 1116.17pt x 512.91624pt>
File: ./Figures/AIVT.png Graphic file (type png) 1505 1505 File: ./Figures/AIVT.png Graphic file (type png)
<use ./Figures/AIVT.png> 1506 1506 <use ./Figures/AIVT.png>
Package pdftex.def Info: ./Figures/AIVT.png used on input line 23. 1507 1507 Package pdftex.def Info: ./Figures/AIVT.png used on input line 23.
(pdftex.def) Requested size: 427.43153pt x 196.41287pt. 1508 1508 (pdftex.def) Requested size: 427.43153pt x 196.41287pt.
1509 1509
[42 <./Figures/AIVT.png>] 1510 1510 [42 <./Figures/AIVT.png>]
Underfull \hbox (badness 3049) in paragraph at lines 44--45 1511 1511 Underfull \hbox (badness 3049) in paragraph at lines 44--45
[]|\T1/phv/m/n/10.95 Discipline des in-for-ma-tions conte- 1512 1512 []|\T1/phv/m/n/10.95 Discipline des in-for-ma-tions conte-
[] 1513 1513 []
1514 1514
1515 1515
Underfull \hbox (badness 2435) in paragraph at lines 46--46 1516 1516 Underfull \hbox (badness 2435) in paragraph at lines 46--46
[]|\T1/phv/m/n/10.95 Le ni-veau sco-laire de la ma-tière 1517 1517 []|\T1/phv/m/n/10.95 Le ni-veau sco-laire de la ma-tière
[] 1518 1518 []
1519 1519
1520 1520
Underfull \hbox (badness 7468) in paragraph at lines 47--48 1521 1521 Underfull \hbox (badness 7468) in paragraph at lines 47--48
[]|\T1/phv/m/n/10.95 Professeur, Ad-mi-nis- 1522 1522 []|\T1/phv/m/n/10.95 Professeur, Ad-mi-nis-
[] 1523 1523 []
1524 1524
1525 1525
Underfull \hbox (badness 7468) in paragraph at lines 48--49 1526 1526 Underfull \hbox (badness 7468) in paragraph at lines 48--49
[]|\T1/phv/m/n/10.95 Professeur, Ad-mi-nis- 1527 1527 []|\T1/phv/m/n/10.95 Professeur, Ad-mi-nis-
[] 1528 1528 []
1529 1529
1530 1530
Underfull \hbox (badness 5050) in paragraph at lines 52--52 1531 1531 Underfull \hbox (badness 5050) in paragraph at lines 52--52
[]|\T1/phv/m/n/10.95 Le type d'in-for-ma-tions conte-nues 1532 1532 []|\T1/phv/m/n/10.95 Le type d'in-for-ma-tions conte-nues
[] 1533 1533 []
1534 1534
1535 1535
Underfull \hbox (badness 10000) in paragraph at lines 54--55 1536 1536 Underfull \hbox (badness 10000) in paragraph at lines 54--55
[]|\T1/phv/m/n/10.95 Connaissances et 1537 1537 []|\T1/phv/m/n/10.95 Connaissances et
[] 1538 1538 []
1539 1539
1540 1540
Overfull \hbox (1.98096pt too wide) in paragraph at lines 57--57 1541 1541 Overfull \hbox (1.98096pt too wide) in paragraph at lines 57--57
[]|\T1/phv/m/n/10.95 Représentation 1542 1542 []|\T1/phv/m/n/10.95 Représentation
[] 1543 1543 []
1544 1544
1545 1545
Overfull \hbox (1.98096pt too wide) in paragraph at lines 58--58 1546 1546 Overfull \hbox (1.98096pt too wide) in paragraph at lines 58--58
[]|\T1/phv/m/n/10.95 Représentation 1547 1547 []|\T1/phv/m/n/10.95 Représentation
[] 1548 1548 []
1549 1549
1550 1550
Underfull \hbox (badness 10000) in paragraph at lines 59--60 1551 1551 Underfull \hbox (badness 10000) in paragraph at lines 59--60
[]|\T1/phv/m/n/10.95 Représentation tex- 1552 1552 []|\T1/phv/m/n/10.95 Représentation tex-
[] 1553 1553 []
1554 1554
1555 1555
Underfull \hbox (badness 10000) in paragraph at lines 59--60 1556 1556 Underfull \hbox (badness 10000) in paragraph at lines 59--60
\T1/phv/m/n/10.95 tuel et gra-phique 1557 1557 \T1/phv/m/n/10.95 tuel et gra-phique
[] 1558 1558 []
1559 1559
1560 1560
Underfull \hbox (badness 2343) in paragraph at lines 63--64 1561 1561 Underfull \hbox (badness 2343) in paragraph at lines 63--64
[]|\T1/phv/m/n/10.95 Ordinateur ou ap-pa- 1562 1562 []|\T1/phv/m/n/10.95 Ordinateur ou ap-pa-
[] 1563 1563 []
1564 1564
1565 1565
Underfull \vbox (badness 10000) has occurred while \output is active [] 1566 1566 Underfull \vbox (badness 10000) has occurred while \output is active []
1567 1567
[43] 1568 1568 [43]
<./Figures/Architecture AI-VT2.png, id=992, 1029.8475pt x 948.54375pt> 1569 1569 <./Figures/Architecture AI-VT2.png, id=992, 1029.8475pt x 948.54375pt>
File: ./Figures/Architecture AI-VT2.png Graphic file (type png) 1570 1570 File: ./Figures/Architecture AI-VT2.png Graphic file (type png)
<use ./Figures/Architecture AI-VT2.png> 1571 1571 <use ./Figures/Architecture AI-VT2.png>
Package pdftex.def Info: ./Figures/Architecture AI-VT2.png used on input line 1572 1572 Package pdftex.def Info: ./Figures/Architecture AI-VT2.png used on input line
80. 1573 1573 80.
(pdftex.def) Requested size: 427.43153pt x 393.68173pt. 1574 1574 (pdftex.def) Requested size: 427.43153pt x 393.68173pt.
1575 1575
Underfull \vbox (badness 10000) has occurred while \output is active [] 1576 1576 Underfull \vbox (badness 10000) has occurred while \output is active []
1577 1577
[44] 1578 1578 [44]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1579 1579 Underfull \vbox (badness 10000) has occurred while \output is active []
1580 1580
[45 <./Figures/Architecture AI-VT2.png>] 1581 1581 [45 <./Figures/Architecture AI-VT2.png>]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1582 1582 Underfull \vbox (badness 10000) has occurred while \output is active []
1583 1583
[46] 1584 1584 [46]
[47] [48] 1585 1585 [47] [48]
<./Figures/Layers.png, id=1019, 392.46625pt x 216.81pt> 1586 1586 <./Figures/Layers.png, id=1019, 392.46625pt x 216.81pt>
File: ./Figures/Layers.png Graphic file (type png) 1587 1587 File: ./Figures/Layers.png Graphic file (type png)
<use ./Figures/Layers.png> 1588 1588 <use ./Figures/Layers.png>
Package pdftex.def Info: ./Figures/Layers.png used on input line 153. 1589 1589 Package pdftex.def Info: ./Figures/Layers.png used on input line 153.
(pdftex.def) Requested size: 313.9734pt x 173.44823pt. 1590 1590 (pdftex.def) Requested size: 313.9734pt x 173.44823pt.
<./Figures/flow.png, id=1021, 721.69624pt x 593.21625pt> 1591 1591 <./Figures/flow.png, id=1021, 721.69624pt x 593.21625pt>
File: ./Figures/flow.png Graphic file (type png) 1592 1592 File: ./Figures/flow.png Graphic file (type png)
<use ./Figures/flow.png> 1593 1593 <use ./Figures/flow.png>
Package pdftex.def Info: ./Figures/flow.png used on input line 164. 1594 1594 Package pdftex.def Info: ./Figures/flow.png used on input line 164.
(pdftex.def) Requested size: 427.43153pt x 351.33421pt. 1595 1595 (pdftex.def) Requested size: 427.43153pt x 351.33421pt.
) [49 <./Figures/Layers.png>] [50 <./Figures/flow.png>] 1596 1596 ) [49 <./Figures/Layers.png>] [50 <./Figures/flow.png>]
\openout2 = `./chapters/ESCBR.aux'. 1597 1597 \openout2 = `./chapters/ESCBR.aux'.
1598 1598
1599 1599
(./chapters/ESCBR.tex 1600 1600 (./chapters/ESCBR.tex
Chapitre 6. 1601 1601 Chapitre 6.
1602 1602
Underfull \hbox (badness 1383) in paragraph at lines 7--9 1603 1603 Underfull \hbox (badness 1383) in paragraph at lines 7--9
\T1/phv/m/n/10.95 multi-agents cog-ni-tifs im-plé-men-tant un rai-son-ne-ment B 1604 1604 \T1/phv/m/n/10.95 multi-agents cog-ni-tifs im-plé-men-tant un rai-son-ne-ment B
ayé-sien. Cette as-so-cia-tion, 1605 1605 ayé-sien. Cette as-so-cia-tion,
[] 1606 1606 []
1607 1607
1608 1608
Underfull \hbox (badness 10000) in paragraph at lines 7--9 1609 1609 Underfull \hbox (badness 10000) in paragraph at lines 7--9
1610 1610
[] 1611 1611 []
1612 1612
[51 1613 1613 [51
1614 1614
1615 1615
1616 1616
1617 1617
] 1618 1618 ]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1619 1619 Underfull \vbox (badness 10000) has occurred while \output is active []
1620 1620
[52] 1621 1621 [52]
<./Figures/NCBR0.png, id=1064, 623.32875pt x 459.7175pt> 1622 1622 <./Figures/NCBR0.png, id=1064, 623.32875pt x 459.7175pt>
File: ./Figures/NCBR0.png Graphic file (type png) 1623 1623 File: ./Figures/NCBR0.png Graphic file (type png)
<use ./Figures/NCBR0.png> 1624 1624 <use ./Figures/NCBR0.png>
Package pdftex.def Info: ./Figures/NCBR0.png used on input line 34. 1625 1625 Package pdftex.def Info: ./Figures/NCBR0.png used on input line 34.
(pdftex.def) Requested size: 427.43153pt x 315.24129pt. 1626 1626 (pdftex.def) Requested size: 427.43153pt x 315.24129pt.
1627 1627
[53 <./Figures/NCBR0.png>] 1628 1628 [53 <./Figures/NCBR0.png>]
<./Figures/FlowCBR0.png, id=1075, 370.38374pt x 661.47125pt> 1629 1629 <./Figures/FlowCBR0.png, id=1075, 370.38374pt x 661.47125pt>
File: ./Figures/FlowCBR0.png Graphic file (type png) 1630 1630 File: ./Figures/FlowCBR0.png Graphic file (type png)
<use ./Figures/FlowCBR0.png> 1631 1631 <use ./Figures/FlowCBR0.png>
Package pdftex.def Info: ./Figures/FlowCBR0.png used on input line 45. 1632 1632 Package pdftex.def Info: ./Figures/FlowCBR0.png used on input line 45.
(pdftex.def) Requested size: 222.23195pt x 396.8858pt. 1633 1633 (pdftex.def) Requested size: 222.23195pt x 396.8858pt.
[54 <./Figures/FlowCBR0.png>] 1634 1634 [54 <./Figures/FlowCBR0.png>]
<./Figures/Stacking1.png, id=1084, 743.77875pt x 414.54875pt> 1635 1635 <./Figures/Stacking1.png, id=1084, 743.77875pt x 414.54875pt>
File: ./Figures/Stacking1.png Graphic file (type png) 1636 1636 File: ./Figures/Stacking1.png Graphic file (type png)
<use ./Figures/Stacking1.png> 1637 1637 <use ./Figures/Stacking1.png>
Package pdftex.def Info: ./Figures/Stacking1.png used on input line 84. 1638 1638 Package pdftex.def Info: ./Figures/Stacking1.png used on input line 84.
(pdftex.def) Requested size: 427.43153pt x 238.23717pt. 1639 1639 (pdftex.def) Requested size: 427.43153pt x 238.23717pt.
[55] 1640 1640 [55]
<./Figures/SolRep.png, id=1095, 277.035pt x 84.315pt> 1641 1641 <./Figures/SolRep.png, id=1095, 277.035pt x 84.315pt>
File: ./Figures/SolRep.png Graphic file (type png) 1642 1642 File: ./Figures/SolRep.png Graphic file (type png)
<use ./Figures/SolRep.png> 1643 1643 <use ./Figures/SolRep.png>
Package pdftex.def Info: ./Figures/SolRep.png used on input line 98. 1644 1644 Package pdftex.def Info: ./Figures/SolRep.png used on input line 98.
(pdftex.def) Requested size: 277.03432pt x 84.31477pt. 1645 1645 (pdftex.def) Requested size: 277.03432pt x 84.31477pt.
<./Figures/AutomaticS.png, id=1096, 688.5725pt x 548.0475pt> 1646 1646 <./Figures/AutomaticS.png, id=1096, 688.5725pt x 548.0475pt>
File: ./Figures/AutomaticS.png Graphic file (type png) 1647 1647 File: ./Figures/AutomaticS.png Graphic file (type png)
<use ./Figures/AutomaticS.png> 1648 1648 <use ./Figures/AutomaticS.png>
Package pdftex.def Info: ./Figures/AutomaticS.png used on input line 107. 1649 1649 Package pdftex.def Info: ./Figures/AutomaticS.png used on input line 107.
(pdftex.def) Requested size: 427.43153pt x 340.20406pt. 1650 1650 (pdftex.def) Requested size: 427.43153pt x 340.20406pt.
1651 1651
Underfull \vbox (badness 10000) has occurred while \output is active [] 1652 1652 Underfull \vbox (badness 10000) has occurred while \output is active []
1653 1653
[56 <./Figures/Stacking1.png> <./Figures/SolRep.png>] [57 <./Figures/Automatic 1654 1654 [56 <./Figures/Stacking1.png> <./Figures/SolRep.png>] [57 <./Figures/Automatic
S.png>] 1655 1655 S.png>]
[58] 1656 1656 [58]
<./Figures/Stacking2.png, id=1130, 743.77875pt x 414.54875pt> 1657 1657 <./Figures/Stacking2.png, id=1130, 743.77875pt x 414.54875pt>
File: ./Figures/Stacking2.png Graphic file (type png) 1658 1658 File: ./Figures/Stacking2.png Graphic file (type png)
<use ./Figures/Stacking2.png> 1659 1659 <use ./Figures/Stacking2.png>
Package pdftex.def Info: ./Figures/Stacking2.png used on input line 192. 1660 1660 Package pdftex.def Info: ./Figures/Stacking2.png used on input line 192.
(pdftex.def) Requested size: 427.43153pt x 238.23717pt. 1661 1661 (pdftex.def) Requested size: 427.43153pt x 238.23717pt.
1662 1662
Underfull \hbox (badness 10000) in paragraph at lines 203--205 1663 1663 Underfull \hbox (badness 10000) in paragraph at lines 203--205
1664 1664
[] 1665 1665 []
1666 1666
[59 <./Figures/Stacking2.png>] 1667 1667 [59 <./Figures/Stacking2.png>]
<Figures/FW.png, id=1145, 456.70625pt x 342.27875pt> 1668 1668 <Figures/FW.png, id=1145, 456.70625pt x 342.27875pt>
File: Figures/FW.png Graphic file (type png) 1669 1669 File: Figures/FW.png Graphic file (type png)
<use Figures/FW.png> 1670 1670 <use Figures/FW.png>
Package pdftex.def Info: Figures/FW.png used on input line 218. 1671 1671 Package pdftex.def Info: Figures/FW.png used on input line 218.
(pdftex.def) Requested size: 427.43153pt x 320.34758pt. 1672 1672 (pdftex.def) Requested size: 427.43153pt x 320.34758pt.
[60 <./Figures/FW.png>] [61] 1673 1673 [60 <./Figures/FW.png>] [61]
<./Figures/boxplot.png, id=1167, 1994.45125pt x 959.585pt> 1674 1674 <./Figures/boxplot.png, id=1167, 1994.45125pt x 959.585pt>
File: ./Figures/boxplot.png Graphic file (type png) 1675 1675 File: ./Figures/boxplot.png Graphic file (type png)
<use ./Figures/boxplot.png> 1676 1676 <use ./Figures/boxplot.png>
Package pdftex.def Info: ./Figures/boxplot.png used on input line 324. 1677 1677 Package pdftex.def Info: ./Figures/boxplot.png used on input line 324.
(pdftex.def) Requested size: 427.43153pt x 205.64786pt. 1678 1678 (pdftex.def) Requested size: 427.43153pt x 205.64786pt.
[62] 1679 1679 [62]
Underfull \hbox (badness 10000) in paragraph at lines 343--344 1680 1680 Underfull \hbox (badness 10000) in paragraph at lines 343--344
1681 1681
[] 1682 1682 []
1683 1683
1684 1684
Underfull \hbox (badness 2564) in paragraph at lines 345--345 1685 1685 Underfull \hbox (badness 2564) in paragraph at lines 345--345
[][]\T1/phv/m/sc/14.4 ESCBR-SMA : In-tro-duc-tion des sys-tèmes multi- 1686 1686 [][]\T1/phv/m/sc/14.4 ESCBR-SMA : In-tro-duc-tion des sys-tèmes multi-
[] 1687 1687 []
1688 1688
1689 1689
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1690 1690 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1691 1691 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 63 1692 1692 S ESCBR \T1/phv/m/n/10.95 63
[] 1693 1693 []
1694 1694
[63 <./Figures/boxplot.png>] 1695 1695 [63 <./Figures/boxplot.png>]
<Figures/NCBR.png, id=1178, 653.44125pt x 445.665pt> 1696 1696 <Figures/NCBR.png, id=1178, 653.44125pt x 445.665pt>
File: Figures/NCBR.png Graphic file (type png) 1697 1697 File: Figures/NCBR.png Graphic file (type png)
<use Figures/NCBR.png> 1698 1698 <use Figures/NCBR.png>
Package pdftex.def Info: Figures/NCBR.png used on input line 355. 1699 1699 Package pdftex.def Info: Figures/NCBR.png used on input line 355.
(pdftex.def) Requested size: 427.43153pt x 291.5149pt. 1700 1700 (pdftex.def) Requested size: 427.43153pt x 291.5149pt.
[64 <./Figures/NCBR.png>] 1701 1701 [64 <./Figures/NCBR.png>]
<Figures/FlowCBR.png, id=1188, 450.68375pt x 822.07124pt> 1702 1702 <Figures/FlowCBR.png, id=1188, 450.68375pt x 822.07124pt>
File: Figures/FlowCBR.png Graphic file (type png) 1703 1703 File: Figures/FlowCBR.png Graphic file (type png)
<use Figures/FlowCBR.png> 1704 1704 <use Figures/FlowCBR.png>
Package pdftex.def Info: Figures/FlowCBR.png used on input line 384. 1705 1705 Package pdftex.def Info: Figures/FlowCBR.png used on input line 384.
(pdftex.def) Requested size: 270.41232pt x 493.24655pt. 1706 1706 (pdftex.def) Requested size: 270.41232pt x 493.24655pt.
1707 1707
Underfull \hbox (badness 1107) in paragraph at lines 417--418 1708 1708 Underfull \hbox (badness 1107) in paragraph at lines 417--418
[]\T1/phv/m/n/10.95 Cette sec-tion pré-sente de ma-nière plus dé-taillée les co 1709 1709 []\T1/phv/m/n/10.95 Cette sec-tion pré-sente de ma-nière plus dé-taillée les co
m-por-te-ments des agents 1710 1710 m-por-te-ments des agents
[] 1711 1711 []
1712 1712
1713 1713
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1714 1714 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1715 1715 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 65 1716 1716 S ESCBR \T1/phv/m/n/10.95 65
[] 1717 1717 []
1718 1718
[65] 1719 1719 [65]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1720 1720 Underfull \vbox (badness 10000) has occurred while \output is active []
1721 1721
[66 <./Figures/FlowCBR.png>] 1722 1722 [66 <./Figures/FlowCBR.png>]
<Figures/agent.png, id=1204, 352.31625pt x 402.50375pt> 1723 1723 <Figures/agent.png, id=1204, 352.31625pt x 402.50375pt>
File: Figures/agent.png Graphic file (type png) 1724 1724 File: Figures/agent.png Graphic file (type png)
<use Figures/agent.png> 1725 1725 <use Figures/agent.png>
Package pdftex.def Info: Figures/agent.png used on input line 458. 1726 1726 Package pdftex.def Info: Figures/agent.png used on input line 458.
(pdftex.def) Requested size: 246.61969pt x 281.7507pt. 1727 1727 (pdftex.def) Requested size: 246.61969pt x 281.7507pt.
1728 1728
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1729 1729 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1730 1730 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 67 1731 1731 S ESCBR \T1/phv/m/n/10.95 67
[] 1732 1732 []
1733 1733
[67] 1734 1734 [67]
<Figures/BayesianEvolution.png, id=1218, 626.34pt x 402.50375pt> 1735 1735 <Figures/BayesianEvolution.png, id=1218, 626.34pt x 402.50375pt>
File: Figures/BayesianEvolution.png Graphic file (type png) 1736 1736 File: Figures/BayesianEvolution.png Graphic file (type png)
<use Figures/BayesianEvolution.png> 1737 1737 <use Figures/BayesianEvolution.png>
Package pdftex.def Info: Figures/BayesianEvolution.png used on input line 471. 1738 1738 Package pdftex.def Info: Figures/BayesianEvolution.png used on input line 471.
1739 1739
(pdftex.def) Requested size: 313.16922pt x 201.25137pt. 1740 1740 (pdftex.def) Requested size: 313.16922pt x 201.25137pt.
[68 <./Figures/agent.png>] 1741 1741 [68 <./Figures/agent.png>]
Underfull \hbox (badness 10000) in paragraph at lines 512--512 1742 1742 Underfull \hbox (badness 10000) in paragraph at lines 512--512
[]|\T1/phv/m/n/8 Input. 1743 1743 []|\T1/phv/m/n/8 Input.
[] 1744 1744 []
1745 1745
1746 1746
Underfull \hbox (badness 10000) in paragraph at lines 512--513 1747 1747 Underfull \hbox (badness 10000) in paragraph at lines 512--513
[]|\T1/phv/m/n/8 Output 1748 1748 []|\T1/phv/m/n/8 Output
[] 1749 1749 []
1750 1750
<Figures/boxplot2.png, id=1233, 1615.03375pt x 835.12pt> 1751 1751 <Figures/boxplot2.png, id=1233, 1615.03375pt x 835.12pt>
File: Figures/boxplot2.png Graphic file (type png) 1752 1752 File: Figures/boxplot2.png Graphic file (type png)
<use Figures/boxplot2.png> 1753 1753 <use Figures/boxplot2.png>
Package pdftex.def Info: Figures/boxplot2.png used on input line 644. 1754 1754 Package pdftex.def Info: Figures/boxplot2.png used on input line 644.
(pdftex.def) Requested size: 427.43153pt x 221.01265pt. 1755 1755 (pdftex.def) Requested size: 427.43153pt x 221.01265pt.
1756 1756
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1757 1757 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1758 1758 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 69 1759 1759 S ESCBR \T1/phv/m/n/10.95 69
[] 1760 1760 []
1761 1761
[69 <./Figures/BayesianEvolution.png>] 1762 1762 [69 <./Figures/BayesianEvolution.png>]
Underfull \vbox (badness 10000) has occurred while \output is active [] 1763 1763 Underfull \vbox (badness 10000) has occurred while \output is active []
1764 1764
[70] 1765 1765 [70]
1766 1766
LaTeX Warning: Text page 71 contains only floats. 1767 1767 LaTeX Warning: Text page 71 contains only floats.
1768 1768
1769 1769
Overfull \hbox (5.60397pt too wide) has occurred while \output is active 1770 1770 Overfull \hbox (5.60397pt too wide) has occurred while \output is active
\T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN 1771 1771 \T1/phv/m/sl/10.95 6.3. ESCBR-SMA : INTRODUCTION DES SYSTÈMES MULTI-AGENTS DAN
S ESCBR \T1/phv/m/n/10.95 71 1772 1772 S ESCBR \T1/phv/m/n/10.95 71
[] 1773 1773 []
1774 1774
[71 <./Figures/boxplot2.png>]) [72] 1775 1775 [71 <./Figures/boxplot2.png>]) [72]
\openout2 = `./chapters/TS.aux'. 1776 1776 \openout2 = `./chapters/TS.aux'.
1777 1777
(./chapters/TS.tex 1778 1778 (./chapters/TS.tex
Chapitre 7. 1779 1779 Chapitre 7.
1780 1780
Underfull \vbox (badness 10000) has occurred while \output is active [] 1781 1781 Underfull \vbox (badness 10000) has occurred while \output is active []
1782 1782
[73 1783 1783 [73
1784 1784
1785 1785
1786 1786
1787 1787
] 1788 1788 ]
Overfull \hbox (19.02232pt too wide) in paragraph at lines 60--86 1789 1789 Overfull \hbox (19.02232pt too wide) in paragraph at lines 60--86
[][] 1790 1790 [][]
[] 1791 1791 []
1792 1792
[74] 1793 1793 [74]
Package hyperref Info: bookmark level for unknown algorithm defaults to 0 on in 1794 1794 Package hyperref Info: bookmark level for unknown algorithm defaults to 0 on in
put line 124. 1795 1795 put line 124.
[75] 1796 1796 [75]
<./Figures/dataset.png, id=1293, 15.13687pt x 8.08058pt> 1797 1797 <./Figures/dataset.png, id=1293, 15.13687pt x 8.08058pt>
File: ./Figures/dataset.png Graphic file (type png) 1798 1798 File: ./Figures/dataset.png Graphic file (type png)
<use ./Figures/dataset.png> 1799 1799 <use ./Figures/dataset.png>
Package pdftex.def Info: ./Figures/dataset.png used on input line 146. 1800 1800 Package pdftex.def Info: ./Figures/dataset.png used on input line 146.
(pdftex.def) Requested size: 427.43153pt x 228.35583pt. 1801 1801 (pdftex.def) Requested size: 427.43153pt x 228.35583pt.
[76] 1802 1802 [76]
<./Figures/comp2.png, id=1305, 14.98512pt x 7.33133pt> 1803 1803 <./Figures/comp2.png, id=1305, 14.98512pt x 7.33133pt>
File: ./Figures/comp2.png Graphic file (type png) 1804 1804 File: ./Figures/comp2.png Graphic file (type png)
<use ./Figures/comp2.png> 1805 1805 <use ./Figures/comp2.png>
Package pdftex.def Info: ./Figures/comp2.png used on input line 182. 1806 1806 Package pdftex.def Info: ./Figures/comp2.png used on input line 182.
(pdftex.def) Requested size: 427.43153pt x 209.34462pt. 1807 1807 (pdftex.def) Requested size: 427.43153pt x 209.34462pt.