Past and Current Research Topics
- Computer Science
- Artificial Intelligence
- Knowledge Representation (explicit or tacit, common sense or domain-specific, categories or taxonomies, lattice theory, graph theory, ontological or epistemic nature, Semantic Web)
- Expert Systems (specialized knowledge applied to one domain)
- Reasoning (search (blind or heuristic), logic, constraint satisfaction, probability theory, Bayes' rule, default reasoning, abduction)
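A quick worked example of Bayes' rule, with made-up prevalence and test numbers, shows how a prior belief is updated by evidence:

```python
# Bayes' rule with hypothetical numbers: a test with 99% sensitivity and 95% specificity
# applied to a condition that affects 1% of the population.
prior = 0.01                 # P(condition)
sensitivity = 0.99           # P(positive | condition)
false_positive_rate = 0.05   # P(positive | no condition)

evidence = sensitivity * prior + false_positive_rate * (1 - prior)   # P(positive)
posterior = sensitivity * prior / evidence                           # P(condition | positive)
print(round(posterior, 3))   # ~0.167: a positive test alone is far from conclusive
```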
- Machine Learning
Learning as knowledge acquisition through training on observations (a dataset), without explicit step-by-step instructions or rules (a bottom-up approach)
Building a model without explicit representation
Using and automatically updating explicit or tacit knowledge (the models produced by learning) for a specific task (classification, clustering, pattern recognition, control, or prediction)
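A minimal sketch of this bottom-up approach, using scikit-learn and its built-in iris dataset purely for illustration: a classifier is fitted to labelled observations and evaluated on data it has never seen, with no hand-written rules.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Learn a classification model purely from observations (the dataset).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```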
- Deep Neural Network (DNN) fundamentals (mathematical foundations; handling big data better and producing models that generalize well)
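The mathematical core can be sketched in a few lines of NumPy: each layer is an affine map followed by a nonlinearity, and the output is normalized into class probabilities (random weights here, only to show the arithmetic).

```python
import numpy as np

# Forward pass of a tiny two-layer feed-forward network.
rng = np.random.default_rng(0)
x = rng.normal(size=(4,))            # one input vector with 4 features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

h = np.maximum(0, W1 @ x + b1)       # hidden layer: affine transform + ReLU
logits = W2 @ h + b2                 # output layer: affine transform
probs = np.exp(logits - logits.max())
probs /= probs.sum()                 # softmax -> probabilities over 3 classes
print(probs)
```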
- Generative Adversarial Networks (GANs)
- Transfer learning: Someone who can solve a 3x3x3 Rubik's cube should be able to solve a 2x2x2 Rubik's cube easily by reusing the skills already learned; that is transfer learning. Knowledge acquired in one subdomain can be generalized or specialized. The idea became popular in deep learning after researchers applied it successfully to image recognition: a pretrained DNN can be readily reused for a new image recognition task it has never seen and then fine-tuned to that specific task. In human cognition, transfer learning goes far beyond image recognition. A musician can play almost any instrument by learning its mechanics and fine-tuning by ear; a pianist can become a violinist or a saxophonist through transfer learning. This is basic cognition (horizontal transfer learning), and computers are still far from it. Metacognition (vertical transfer learning) is what makes humans so powerful, unleashing unlimited creativity; it is what allows an exceptional musician to become a composer or a transcriber, a master of the domain.
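A hedged sketch of the image-recognition case, assuming TensorFlow/Keras and a hypothetical 5-class target task: a network pretrained on ImageNet is reused as a frozen feature extractor, and only a small new head is trained before optional fine-tuning.

```python
import tensorflow as tf

# Reuse a pretrained network as a frozen feature extractor (downloads ImageNet weights
# on first use) and attach a small task-specific head for a hypothetical 5-class problem.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the transferred knowledge fixed at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # new head for the new task
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_task_images, new_task_labels, epochs=5)   # hypothetical new-task data
model.summary()
```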
- Model tuning and real-time updates
- Attention Mechanisms (focusing on the most relevant parts of a DNN's input, as a neural-network component. This extends the sequence-to-sequence model built on recurrent neural networks (RNNs) by adding a context layer that weighs the most important links and nodes; there are also attention variants based on dropout layers.) In deep learning jargon, the architecture built from these layers is the "Transformer": stacked encoder/decoder layers, each composed of a self-attention sublayer and a feed-forward sublayer, tied together by residual addition and normalization operations. It is used in Machine Translation and Natural Language Processing (NLP) tasks such as Question Answering, Natural Language Understanding, and Document Classification, notably through the attention-based model BERT (Bidirectional Encoder Representations from Transformers, introduced in the paper "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"). In recent years, Google DeepMind has applied this learning mechanism to protein structure prediction (using deep learning to infer 3D structures) and made a breakthrough at the 2020 CASP competition, reaching a score of roughly 87 on the GDT accuracy scale for the hardest targets (reported in Nature). This is an unprecedented result, and everyone is awaiting the publication of the article describing their program, AlphaFold 2.
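The self-attention sublayer reduces to one formula, attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; below is a minimal NumPy illustration with random token embeddings and projection matrices, just to show the operation.

```python
import numpy as np

# Scaled dot-product attention: the core operation inside a Transformer layer.
def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # how much each token attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))             # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
context, attn = scaled_dot_product_attention(tokens @ Wq, tokens @ Wk, tokens @ Wv)
print(attn.round(2))                         # each row of attention weights sums to 1
```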
- Autonomous Machine Learning (the idea that learning can be achieved without supervision or a teacher; a machine teaching itself from scratch. The process resembles unsupervised machine learning. The game-playing systems AlphaZero and MuZero are examples: they learn to play chess, Shogi, Go, or (for MuZero) Atari games without any instructions, just by playing and adjusting their DNNs with a reward at the end of each game. However, a skilled programmer is still needed to make them work.)
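Not AlphaZero or MuZero themselves (those combine deep networks with tree search), but a toy illustration of the same idea of self-play with only an end-of-game reward, here with a simple tabular value estimate on the game of Nim.

```python
import random
from collections import defaultdict

# Self-play on Nim (take 1-3 stones; whoever takes the last stone wins). Both players
# share one value table, and the only feedback is the win/lose reward at game end.
PILE, ACTIONS = 12, (1, 2, 3)
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50_000
Q = defaultdict(float)  # (stones_left, action) -> estimated value for the player to move

def choose(stones, explore=True):
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones, history, player = PILE, {0: [], 1: []}, 0
    while stones > 0:
        action = choose(stones)
        history[player].append((stones, action))
        stones -= action
        if stones == 0:
            winner = player          # this player took the last stone
        player = 1 - player
    # End-of-game reward only: +1 for the winner's moves, -1 for the loser's.
    for p in (0, 1):
        reward = 1.0 if p == winner else -1.0
        for state_action in history[p]:
            Q[state_action] += ALPHA * (reward - Q[state_action])

# Inspect the greedy policy learned purely through self-play.
for stones in range(1, PILE + 1):
    print(stones, "->", choose(stones, explore=False))
```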
- Automated Machine Learning (AutoML) is slightly different: its goal is to help users with no machine learning skills apply machine learning to their own problems. As Yann LeCun has remarked, we do not want to kill millions of passengers while the best autonomous driving car builds itself from scratch without any prior rules embedded. So corporations opted for the AutoML approach and kept Autonomous Machine Learning for games and research.
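A minimal flavour of the AutoML idea, assuming scikit-learn: the search over preprocessing and hyperparameters is automated instead of done by hand (real AutoML systems go much further, e.g. neural architecture search).

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Automated model selection: cross-validated search over a small configuration space.
X, y = load_wine(return_X_y=True)
pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
grid = {"clf__C": [0.1, 1, 10], "clf__kernel": ["linear", "rbf"]}
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```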
- Agents and multi-agent systems
- Reinforcement learning (reward mechanisms), adaptation, optimization, goal-designed and goal-directed agents
- Coordination mechanisms (hierarchical control, central planning, independent concurrent intelligent agents with common-sense rules, homogeneous or heterogeneous agent control, emergence mechanisms)
- Cooperation mechanisms (negotiation or competition)
- Art of Programming
- Problem-solving approaches (goal-oriented systems with a well-defined initial state and step-by-step instructions over states and actions (state transitions))
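A small sketch of this style of problem solving: breadth-first search over explicitly enumerated states and transitions, here on the classic two-jug puzzle (a 3-litre and a 5-litre jug, with the goal of measuring exactly 4 litres).

```python
from collections import deque

# Goal-oriented search: states are (litres in jug A, litres in jug B), transitions are
# fill, empty, and pour moves, and the goal is any state containing exactly 4 litres.
CAP = (3, 5)
START, GOAL = (0, 0), 4

def successors(state):
    a, b = state
    yield (CAP[0], b); yield (a, CAP[1])                    # fill either jug
    yield (0, b); yield (a, 0)                              # empty either jug
    pour = min(a, CAP[1] - b); yield (a - pour, b + pour)   # pour A -> B
    pour = min(b, CAP[0] - a); yield (a + pour, b - pour)   # pour B -> A

def bfs():
    frontier, seen = deque([(START, [START])]), {START}
    while frontier:
        state, path = frontier.popleft()
        if GOAL in state:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(bfs())   # sequence of states from the initial state to the goal
```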
- Specialized programming languages and tools (Python, R, scientific libraries, machine learning libraries and APIs such as TensorFlow, scikit-learn, or Keras)
- Information Science
- Statistics and algebra
- Information Retrieval, Information filtering, Recommendation systems
- Natural Language Processing based on corpus linguistics and machine-readable dictionaries (WordNet)
- Data science and text mining
- Exploratory Data Analysis (EDA)
- Problem solving based on data: scientific knowledge discovery, hidden knowledge discovery, knowledge visualization
- News report summarization, document topic modeling, social media comment analysis, PCA, auto-encoders, recommendation or filtering based on sentiment analysis and important keywords
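A tiny illustration of keyword-based retrieval or recommendation, assuming scikit-learn and a few made-up documents: texts and a query become TF-IDF vectors and are ranked by cosine similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Keyword-weighted retrieval: rank documents by similarity to a query.
docs = [
    "stock market forecast and investment strategies",
    "deep learning for image recognition",
    "sentiment analysis of social media comments",
]
query = ["social media sentiment"]

vec = TfidfVectorizer()
doc_vectors = vec.fit_transform(docs)
query_vector = vec.transform(query)
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(round(score, 2), doc)
```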
- Applications
- Autonomous Driving System Architecture
Assistant Driving Agent
- Story Understanding
(Common Sense Reasoning)
- Question Answering System, Social Network Analysis, Information Overload filtering
- Game and Animation
- Finance and investments algorithms
- Forecasting systems (time series prediction, physical system simulation)
Made 31 January 2021
by myself.