Browse by Author "Abdulwahhab, Ali H."
Now showing 1 - 8 of 8
Results Per Page
Sort Options
Item
A review on medical image applications based on deep learning techniques (University of Portsmouth, 2024)
Abdulwahhab, Ali H.; Mahmood, Noof T.; Mohammed, Ali Abdulwahhab; Myderrizi, Indrit; Al-Jumaili, Mustafa Hamid
The integration of deep learning in medical image analysis is a transformative leap in healthcare, significantly impacting diagnosis and treatment. This review explores deep learning's applications, revealing the limitations of traditional methods while showcasing its potential. It covers tasks such as segmentation, classification, and enhancement, highlighting the pivotal roles of Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs). Specific applications, such as brain tumor segmentation and COVID-19 diagnosis, are analyzed in depth using datasets such as the NIH Clinical Center's Chest X-ray dataset and the BraTS dataset, which prove invaluable for model training. Emphasizing high-quality datasets, especially in chest X-rays and cancer imaging, the article underscores their relevance in diverse medical imaging applications. It also stresses the managerial implications for healthcare organizations, emphasizing data quality and collaborative partnerships between medical practitioners and data scientists. This review illuminates deep learning's expansive potential in medical image analysis as a catalyst for advancing healthcare diagnostics and treatments.

Item
Analysis of potential 5G transmission methods concerning bit error rate (Elsevier GmbH, 2024)
Abdulwahhab Mohammed, Ali; Abdulwahhab, Ali H.
Fifth-generation wireless (5G) technology significantly impacts how individuals live and work, and its influence is expected to grow. Although OFDM has been used in previous-generation (4G) technologies, it has limitations in meeting specific criteria such as data rate. Shortcomings in power consumption and Bit Error Rate (BER) make it unsuitable for current needs such as the Internet of Things and user-based processing.
This research aims to evaluate the BER performance of transmission techniques proposed as candidates for the 5G communication system. Performance analysis is carried out using the Tapped Delay Line (TDL-A) channel model recommended for 5G research. Various scenarios, including fixed and mobile transmitter/receiver configurations, are considered to provide a more comprehensive evaluation. Evaluations were also carried out over various channel delay distribution profiles to broaden the scope of the analysis. The results show that under channel conditions with high delay spread and mobility, the communication system using the FBMC transmission technique delivers better and more stable performance than the OFDM, F-OFDM, WOLA, and UFMC systems: it requires an SNR of about 36 dB to achieve a BER of approximately 10^-3 under 10 ns delay-spread conditions, demonstrating the superiority of FBMC in difficult channel conditions. These findings emphasize the importance of considering both BER and system complexity in the design of future communication systems. Thus, this research makes a significant original contribution to understanding and developing 5G communication systems.

Item
BCI-DRONE CONTROL BASED ON THE CONCENTRATION LEVEL AND EYE BLINK SIGNALS USING A NEUROSKY HEADSET (University of Kufa, 2025)
Mohammed, Ali Abdulwahhab; Abdulwahhab, Ali H.; Abdulaal, Alaa Hussein; Mahmood, Musaria Karim; Myderrizi, Indrit; Yassin, Riyam Ali; Abdulridha, Taha Talib; Valizadeh, Morteza
Brain neurons drive human movements by producing electrical bio-signals, and this neural activity is exploited in several technologies that operate applications based on brain waves. Brain-Computer Interface (BCI) technology enables a processor to communicate with the brain through signals received from it. This study proposes a drone controlled using EEG signals acquired by a Neurosky device based on a BCI system.
Two active signals are adapted for controlling the drone's motions: concentration brain signals, represented by an attention level, and eye blinks, encoded as an integer value. A dynamic classification method is implemented via a Linear Regression algorithm for the attention-level code, while eye blinking generates a binary code to control the drone's motions. The accuracy of this code is improved through Artificial Neural Network and Machine Learning techniques. These codes (the attention-level and eye-blink codes) drive two control layers and manipulate nine possible drone movements. The experiment was evaluated with several users and showed high performance for the classification methods and the developed algorithm, achieving 90.37% control accuracy, which outperforms most existing experiments. The system can also support 16 commands, making the algorithm suitable for various applications.

Item
Detection Lung Nodules Using Medical CT Images Based on Deep Learning Techniques (2025)
Mohammed, Ali Abdulwahhab; Abdulwahhab, Ali H.; Ibraheem, Ibraheem Kasim
Lung nodule cancer detection is a critical and complex medical challenge. Accuracy in detecting lung nodules can significantly improve patient prognosis and care. The main challenge is to develop a detection method that can accurately distinguish between benign and malignant nodules and perform effectively under various imaging conditions. Advances in technology and investment in deep learning techniques in the medical field have made it easy to use Positron Emission Tomography (PET) and Computed Tomography (CT). Thus, this paper presents lung cancer detection by filtering the PET-CT image, obtaining the lung region of interest (ROI), and training Convolutional Neural Network (CNN) deep learning models to locate the nodules. A limited dataset of 220 cases with 560 nodules at fixed Hounsfield Units (HU) is used to increase training speed and conserve data.
The trained models, which include CNN, DCNN, 3DCNN, VGG 19, ResNet 18, Inception V1, and Inception-ResNet, detect the lung nodules. The experiment shows high-speed training, with VGG 19 outperforming the other deep learning models; it achieves accuracy, precision, specificity, sensitivity, F1-score, IoU, and FP rate (with standard deviation) of 98.65 ± 0.22, 98.80 ± 0.15, 98.70 ± 0.20, 98.55 ± 0.18, 98.60 ± 0.16, 0.94 ± 0.03, and 1.05 ± 0.22, respectively. Moreover, the experimental results show an overall error rate with a standard deviation between ± 0.04 and ± 0.54 distributed over the reported metrics.

Item
Detection of epileptic seizure using EEG signals analysis based on deep learning techniques (Elsevier, 2024)
Abdulwahhab, Ali H.; Abdulaal, Alaa Hussein; Thary Al-Ghrairi, Assad H.; Mohammed, Ali Abdulwahhab; Valizadeh, Morteza
The electrical activity of brain neurons, represented by Electroencephalogram (EEG) signals, is the most common data for diagnosing epileptic seizures. Epilepsy is a chronic nervous disorder that cannot be controlled by surgical operations or medications in more than 40% of seizure cases. With the progress of artificial intelligence and deep learning techniques, it has become possible to detect these seizures by observing non-stationary, dynamic EEG signals, which contain important information about the mental state of patients. This paper provides a combined deep learning model consisting of two simultaneous techniques for detecting epileptic seizure activity using EEG signals. Time-frequency images of EEG waves and raw EEG waves are used as inputs for a convolutional neural network (CNN) and a recurrent neural network (RNN) with long short-term memory (LSTM).
Two signal processing methods, Short-Time Fourier Transform (STFT) and Continuous Wavelet Transform (CWT), have been used to generate spectrogram and scalogram images with sizes of 77 × 75 and 32 × 32, respectively. The experimental results showed detection accuracies of 99.57% and 99.57% using CWT scalograms, and 99.26% and 97.12% using STFT spectrograms, as CNN input for the Bonn University dataset and the CHB-MIT dataset, respectively. Thus, the proposed models can detect epileptic seizures with high success compared to previous studies.

Item
HAFMAB-Net: hierarchical adaptive fusion based on multilevel attention-enhanced bottleneck neural network for breast histopathological cancer classification (SPRINGER LONDON, 2025)
Abdulwahhab, Ali H.; Bayat, Oğuz; Ibrahim, Abdullahi Abdu
Histological images play a crucial role in diagnosing diseases, especially breast cancer, which remains a major health concern for women worldwide. Computer-aided diagnosis tools significantly assist physicians in early detection and treatment planning, helping reduce mortality rates. Convolutional neural networks (CNNs) based on deep learning have proven effective in distinguishing benign from malignant breast cancers. In this context, HAFMAB-Net (Hierarchical Adaptive Fusion based on Multilevel Attention-Enhanced Bottleneck Neural Network) is proposed. The network comprises two pathways utilizing an enhanced bottleneck architecture with attention mechanisms to extract both global and spatial features. It incorporates a Deeper Spatial Attention Aggregator Module to boost the representation of locative features by focusing on key spatial regions, improving the discriminative power of the aggregated features. Additionally, a modified Adaptive Fusion Module combines the enhanced global and boosted spatial features into a comprehensive and enriched feature representation, which is subsequently used for classification.
The proposed HAFMAB-Net was evaluated on the BACH dataset and further tested on the BreaKHis and LC25000 datasets to validate its robustness. The model achieved 99% accuracy on the BACH dataset, 98.99% accuracy on BreaKHis, and 100% accuracy on both the colon and lung subsets of the LC25000 dataset. These results highlight HAFMAB-Net's efficiency, accuracy, and effectiveness in both multi-class and binary classification tasks, demonstrating its potential for broader applications in medical image analysis.

Item
PAFWF-EEGC Net: parallel adaptive feature weight fusion based on EEG-dynamic characteristics using channels neural network for driver drowsiness detection (Springer Science and Business Media Deutschland GmbH, 2025)
Abdulwahhab, Ali H.; Myderrizi, Indrit; Yurdakul, Muhammet Mustafa
Drowsy driving is one of the most dangerous causes of road accidents and deaths worldwide. Drivers' concentration is directly affected by fatigue, which lengthens their reaction time and reduces their attention and decision-making ability on the road, often leading to dangerous situations. With the development of Human-Computer Interface systems and the rise of intelligent transportation systems, examining the effects of driver fatigue has become more critical, and research aimed at reducing the risk of fatigue-related accidents has gained importance. For this purpose, this study proposes a Parallel Adaptive Feature Weight Fusion based on EEG-Dynamic Characteristics using Channels Neural Network (PAFWF-EEGC Net) to detect driver drowsiness. Two signal processing techniques are used to extract dynamic EEG features: first, the Continuous Wavelet Transform (CWT) to capture spectral-temporal features by accurately estimating both time and frequency localizations, and second, the Fast Fourier Transform (FFT) Power Spectral Density (PSD) to convert the signals from the time domain to the frequency domain and show the distribution of signal power over frequency.
These extracted dynamic features are passed to attention channels and a Parallel Adaptive Feature Fusion stage to integrate the most relevant feature channels for detecting the mental state. Furthermore, three dataset-processing scenarios and cross-validation techniques are used to validate the network. The network showed excellent performance, achieving 98% detection accuracy with ninefold cross-validation under the third scenario, and average detection accuracies of 84%, 88.75%, and 93.8% under the first, second, and third scenarios, respectively.

Item
Unsupervised histopathological sub-image analysis for breast cancer diagnosis using variational autoencoders, clustering, and supervised learning (Mustansiriyah University College of Engineering, 2024)
Abdulaal, Alaa Hussein; Valizadeh, Morteza; Yassin, Riyam Ali; Albaker, Baraa M.; Abdulwahhab, Ali H.; Amirani, Mehdi Chehel; Shah, A. F. M. Shahen
This paper presents an integrated approach to breast cancer diagnosis that combines unsupervised and supervised learning techniques. The method uses a pre-trained VGG19 model to process sub-images from the BreaKHis dataset, with each image divided into nine parts for comprehensive analysis. This is followed by a complete description of the architecture and workings of the Variational Autoencoder (VAE) used for unsupervised learning. The encoder network maps the input features to a lower-dimensional space, capturing the most essential information; the VAE thus learns a compressed representation of the sub-images, facilitating a deeper understanding of the underlying patterns and structures. K-means clustering is then applied to the encoded representations to find naturally occurring clusters in the histopathological image dataset. Each sub-image is subsequently fed into the VGG19-SVM model for classification. At 100x magnification, this model attained a high accuracy of 98.56%.
Combining unsupervised analysis with VAE/k-means clustering and supervised classification with VGG19/SVM integrates information from both methods, thereby improving the accuracy and robustness of sub-image classification in breast cancer histopathology.
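The clustering step described in the last abstract (k-means over low-dimensional encodings of sub-images) can be illustrated with a minimal sketch. This is not the authors' code: the data here are random stand-ins for VAE encodings, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: returns centroids and hard cluster assignments."""
    rng = np.random.default_rng(seed)
    # initialize centroids from k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each encoded sub-image to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Two synthetic, well-separated "encoding" clouds standing in for VAE codes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 8)), rng.normal(5, 0.5, (50, 8))])
centroids, labels = kmeans(X, k=2)
```

In the pipeline the abstract describes, each cluster found this way would group visually similar sub-images before the supervised VGG19-SVM stage classifies them.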