Collaborative Fall Detection and Response using Wi-Fi Sensing and Mobile Companion Robot (2024)

Yunwang Chen†,1, Yaozhong Kang†,2, Ziqi Zhao1, Yue Hong1, Lingxiao Meng1, and Max Q.-H. Meng∗,1

All authors are with the Shenzhen Key Laboratory of Robotics Perception and Intelligence, Southern University of Science and Technology, Shenzhen 518055, China.
1 Yunwang Chen, Ziqi Zhao, Yue Hong, Lingxiao Meng, and Max Q.-H. Meng are also with the Department of Electrical and Electronic Engineering, SUSTech, Shenzhen, China. Emails: {chenyw2021, zhaozq2020, 12331232, menglx2021}@mail.sustech.edu.cn and max.meng@ieee.org.
2 Yaozhong Kang is also with the School of System Design and Intelligent Manufacturing, SUSTech, Shenzhen, China. Email: kangyz2021@mail.sustech.edu.cn.
∗ Corresponding author: Max Q.-H. Meng.
† The first two authors contributed equally to this work.

Abstract

This paper presents a collaborative fall detection and response system that integrates Wi-Fi sensing with robotic assistance. The proposed system leverages channel state information (CSI) disruptions caused by human movements to detect falls in non-line-of-sight (NLOS) scenarios, offering non-intrusive monitoring. In addition, a mobile companion robot autonomously navigates to and responds to detected incidents, improving the efficiency of assistance across varied environments. Experimental results demonstrate that the proposed system detects falls and responds to them effectively.

I Introduction

As people age, they often experience various issues such as mobility decline, cognitive impairment, and physical health deterioration. For the elderly, falls are particularly detrimental, often resulting in long-term health complications and a diminished quality of life. Indoor falls are especially problematic due to the potential lack of immediate assistance, leading to prolonged periods before help arrives, exacerbating injuries and complicating recovery. Consequently, they must rely on the assistance of family members and caregivers, which creates a significant burden on their families [1].

Providing efficient, cost-effective non-line-of-sight (NLOS) home healthcare for this growing group of older adults has a profound societal impact, in which accurate and prompt indoor fall detection is crucial [2]. For caregivers, it is essential to know if an elder has fallen behind an obstacle, such as a closed door. Traditional methods primarily utilize wearable devices equipped with accelerometers and gyroscopes to monitor sudden changes in motion and orientation. These sensors effectively detect rapid movements indicative of falls, providing real-time alerts to caregivers or emergency services [3]. However, these devices often face user compliance issues; elderly individuals may forget to wear them or find them uncomfortable, leading to inconsistent usage and unreliable monitoring. Vision-based systems, which need to be installed indoors, employ cameras and image processing to detect falls by analyzing visual data [4]. Despite their accuracy, these systems raise privacy concerns due to constant surveillance in private spaces, posing a critical barrier to their widespread adoption.

Recent advancements in Wi-Fi sensing have demonstrated potential for human activity recognition by analyzing disruptions in Wi-Fi signals caused by movements [5]. Wi-Fi sensing allows widely used commercial Wi-Fi access points (APs) installed outside obstacles to detect falls inside using the channel state information (CSI) between the Wi-Fi AP and other Wi-Fi devices, effectively addressing the need for NLOS detection. This non-intrusive approach can detect falls without compromising privacy, making it a promising solution for sensitive areas such as bathrooms.

To quickly reach the patient and provide emergency treatment, mobile manipulator systems have emerged as a promising solution for providing timely assistance to the elderly [6]. Typical mobile manipulators are equipped with navigation and manipulation capabilities, enabling them to autonomously perform a variety of tasks that support independent living. By integrating these robotic advancements with Wi-Fi sensing technology, we can improve the efficiency and quality of emergency responses.

In this paper, we propose a novel fall detection and response system that integrates Wi-Fi sensing with a patrolling robot capable of providing assistance. The Wi-Fi sensing device detects falls by analyzing CSI disruptions in Wi-Fi signals caused by human movements. Specifically, we utilize the amplitude of Wi-Fi CSI provided by commodity Wi-Fi devices along with deep learning models to detect falls. The patrolling robot can autonomously respond to fall incidents, navigate indoor environments (including door traversal), and offer assistance to the fallen person. Our experiments show that this integrated approach provides continuous, privacy-preserving monitoring and immediate, autonomous assistance.

The rest of the paper is organized as follows: Section II reviews related work on Wi-Fi sensing and mobile companion robots. Section III describes the methods and system design, including the Wi-Fi sensing module and the companion robot. Section IV details the real-world experiments conducted to evaluate the system, including the experimental setup and results. Finally, Section V concludes the paper and discusses future work.

II Related Work

II-A Wi-Fi Sensing using CSI

Wi-Fi devices based on the IEEE 802.11a/g/n/ac standards employ orthogonal frequency division multiplexing (OFDM) as the modulation scheme, featuring multiple sub-carriers in a Wi-Fi channel and multiple antennas to mitigate frequency-selective fading. The receiver measures a discrete channel frequency response (CFR) over time and frequency as phase and amplitude, encapsulated in the form of CSI for each antenna pair.

In wireless communication, CSI is known as the channel property of a wireless communication link. CSI captures the effects of various phenomena, including reflection, scattering, and fading, which occur as a Wi-Fi signal propagates through an environment. By measuring the amplitude and phase of the signal at different subcarriers between the transmitter and the receiver, CSI provides a comprehensive description of how the signal is modified. This information can be utilized to infer the presence and movements of objects within the environment [5]. Mathematically, the relationship between the transmitted signal $\mathbf{X}$ and the received signal $\mathbf{Y}$ is represented as:

$\mathbf{Y} = \mathbf{H}\mathbf{X} + \mathbf{N}$ (1)

where $\mathbf{Y} \in \mathbb{C}^{N_r \times \delta t}$ is the received signal matrix, $\mathbf{H} \in \mathbb{C}^{N_r \times N_t}$ is the CSI matrix representing the channel effects, $\mathbf{X} \in \mathbb{C}^{N_t \times \delta t}$ is the transmitted signal matrix, and $\mathbf{N} \in \mathbb{C}^{N_r \times \delta t}$ denotes the noise matrix. Here, $N_r$ is the number of receiving antennas, $N_t$ is the number of transmitting antennas, and $\delta t$ is the length of the communication frame.
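For concreteness, the following minimal NumPy sketch instantiates Eq. (1) for a single subcarrier and extracts the CSI amplitudes that the sensing pipeline operates on; all dimensions and the noise level are illustrative placeholders, not values from our setup.

# Minimal sketch of the narrowband CSI model Y = HX + N for one subcarrier.
import numpy as np

rng = np.random.default_rng(0)
Nr, Nt, dt = 3, 2, 100   # receive antennas, transmit antennas, frame length

H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
X = (rng.standard_normal((Nt, dt)) + 1j * rng.standard_normal((Nt, dt))) / np.sqrt(2)
N = 0.01 * (rng.standard_normal((Nr, dt)) + 1j * rng.standard_normal((Nr, dt)))

Y = H @ X + N            # received signal at this subcarrier, Eq. (1)

# The sensing pipeline uses the amplitude of the reported CSI matrix H;
# a fall perturbs the multipath channel and hence these amplitudes over time.
print(np.abs(H))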

Recent advancements in Wi-Fi sensing have employed deep learning techniques to process CSI data for human activity recognition (HAR). El Zein et al. achieved over 90% accuracy using deep CNNs and time series data augmentation for real-time HAR [7]. Su et al. employed multilayer bidirectional LSTM networks with self-powered sensors, achieving over 96% accuracy [8]. Mekruksavanich and Jitpattanakul's CSI-ResNeXt network achieved 99.17% accuracy with lightweight deep residual networks [9]. However, most of these works lack real-world test and validation experiments and have not been deployed in real-world NLOS scenarios.

II-B Mobile Manipulator System

Several studies have highlighted the effectiveness of mobile service robots equipped with navigation and manipulation capabilities in providing timely assistance to the elderly [6]. For single tasks such as door traversal, recent works [10] [11] [12] have proposed solutions using either pre-planned methods or sensing-based approaches to handle obstacles in the way. For long-horizon tasks, the Mobile ALOHA project has introduced a low-cost mobile manipulation system capable of performing complex tasks through whole-body teleoperation and imitation learning [13]. Furthermore, the multi-skill mobile manipulation approach proposed by [14] integrates mobility with manipulation skills and introduces a region-goal navigation reward, demonstrating superior performance in long-horizon mobile manipulation tasks.

These studies highlight the potential for effectively managing tasks related to elderly care. By integrating Wi-Fi sensing as an alert, a mobile companion robot system can improve emergency response efficiency and overall care quality for older adults.

III Methods

[Fig. 1: Overview of the proposed collaborative fall detection and response system.]

The proposed system consists of two main components: the Wi-Fi sensing module and the mobile companion robot, as shown in Fig. 1. The Wi-Fi sensing module detects falls by analyzing signal disruptions caused by movements. The mobile companion robot, equipped with navigation and manipulation capabilities, responds to detected falls by providing assistance and contacting caregivers if necessary.

III-A Wi-Fi Sensing Component

Wi-Fi sensing leverages CSI to identify anomalies in Wi-Fi signals that can indicate events such as falls. We adopt the two-stream convolution augmented transformer model proposed by [15] as our base model. This model captures intricate patterns and dependencies in the data that traditional machine learning methods may overlook: LSTM networks are adept at modeling temporal sequences in CSI data, while CNNs excel at identifying spatial features across CSI measurements. The base model integrates both CNN and LSTM with a transformer multi-head attention strategy, significantly enhancing the accuracy and robustness of activity recognition and making it more feasible for real-world applications.
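The following is a hedged PyTorch sketch of a two-stream classifier in this spirit — an LSTM stream over time, a convolutional stream over subcarriers, and multi-head self-attention fusing the two. The layer sizes and the 90-subcarrier input (30 subcarriers x 3 antennas on the Intel 5300) are illustrative placeholders, not the exact architecture of [15].

# Simplified two-stream CSI classifier sketch (not the authors' exact model).
import torch
import torch.nn as nn

class TwoStreamCSI(nn.Module):
    """Input: CSI amplitude windows of shape (batch, T, n_sub)."""
    def __init__(self, n_sub=90, n_classes=7, d_model=64, n_heads=4):
        super().__init__()
        # Temporal stream: LSTM over the sequence of CSI frames.
        self.lstm = nn.LSTM(n_sub, d_model, batch_first=True)
        # Spatial stream: 1-D convolution across subcarriers, applied per frame.
        self.conv = nn.Conv1d(1, d_model, kernel_size=5, padding=2)
        # Fusion: transformer-style multi-head self-attention over both streams.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                            # x: (B, T, S)
        B, T, S = x.shape
        t_feat, _ = self.lstm(x)                     # (B, T, d)
        s = self.conv(x.reshape(B * T, 1, S))        # (B*T, d, S)
        s_feat = s.mean(dim=2).reshape(B, T, -1)     # pool subcarriers -> (B, T, d)
        fused = torch.cat([t_feat, s_feat], dim=1)   # concatenate the two streams
        fused, _ = self.attn(fused, fused, fused)    # multi-head self-attention
        return self.head(fused.mean(dim=1))          # (B, n_classes) logits

logits = TwoStreamCSI()(torch.randn(8, 100, 90))     # e.g. 100 frames x 90 subcarriers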

During the training process, we collected real CSI data corresponding to three distinct states, 'Fall', 'Normal', and 'No-person/Static', in our NLOS experiment setup, as detailed in Section IV-A. To enhance the robustness and accuracy of our model, we employed transfer learning techniques. Specifically, we pre-trained the transformer model using the dataset from [5] and then fine-tuned it on the newly collected data after replacing the original final layer, as detailed in Section IV-B.
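A minimal sketch of this fine-tuning step is shown below, reusing the TwoStreamCSI sketch above; the checkpoint name and optimizer settings are hypothetical, not our exact training code.

# Transfer learning sketch: load the 7-class pretrained network, swap its
# final layer for a 3-class head ('Fall', 'Normal', 'No-person/Static'),
# and fine-tune on the newly collected NLOS data.
import torch
import torch.nn as nn

model = TwoStreamCSI(n_classes=7)                          # architecture from above
model.load_state_dict(torch.load("pretrained_7class.pt"))  # hypothetical checkpoint

model.head = nn.Linear(model.head.in_features, 3)          # replace the final layer

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()                          # applies softmax internally

def fine_tune_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()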

III-B Mobile Companion Robot

III-B1 Hardware Design

[Fig. 2: Hardware of the mobile companion robot.]

Our mobile robot features a differential-drive base with two powered wheels, dual robotic arms, an RGB-D camera, and a laser scanner, as shown in Fig. 2. The dual-arm platform features two 7-DOF arms, giving it the ability to handle complicated missions such as freeing itself from obstructions or performing first-aid tasks. The total reachable workspace and the shared workspace of the dual arms are identified in [16].

III-B2 Software Design

[Fig. 3: Software architecture of the mobile companion robot.]

The robot utilizes various sensors to collect data for its operation. Alarm data, containing the status of the monitored elderly person, is obtained through Wi-Fi sensing. Joint states are obtained from arm encoders, which provide precise absolute joint angles from which the pose of the end-effector is computed through forward kinematics. The RGB-D camera captures image data for object recognition and provides video for remote assessment. IMU (inertial measurement unit) data, including gyroscope and accelerometer readings obtained from the RGB-D camera's IMU module, aids in estimating the pose of the robot body. Additionally, the laser scanner (LiDAR) offers detailed distance measurements used to build a map of the surroundings with gmapping and to relocalize the robot in a pre-built map using AMCL [17].

In addition, the control system is divided into two main controllers: the arm controller and the base controller. The arm controller executes the planned trajectories for the robotic arms, while the base controller manages the navigation and movement of the robot chassis.

The data collected from these sensors undergoes several processing steps to ensure accurate localization and planning. AprilTag pose estimation is performed on the RGB image data to recover the precise 3-D pose of the door and thereby localize the door handle. The odometry of the robot's body is calculated using VINS-Fusion [18], which combines visual and inertial data to provide accurate state estimates in scenarios that degrade common LiDAR-wheel-odometry fusion algorithms.
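As an illustration of tag-based handle localization, the sketch below detects an AprilTag with the pupil_apriltags package and reads out its pose in the camera frame; the tag family, tag size, camera intrinsics, and image source are placeholders rather than our calibration values.

# AprilTag pose estimation sketch (illustrative, not our deployed pipeline).
import cv2
from pupil_apriltags import Detector

detector = Detector(families="tag36h11")
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0      # hypothetical camera intrinsics

frame = cv2.imread("door_view.png")               # stand-in for the RGB stream
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for tag in detector.detect(gray, estimate_tag_pose=True,
                           camera_params=(fx, fy, cx, cy), tag_size=0.08):
    # pose_R / pose_t give the tag's rotation and translation in the camera
    # frame; a fixed tag-to-handle offset then locates the handle itself.
    print(tag.tag_id, tag.pose_t.ravel())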

For inverse kinematics and arm control, the system integrates multiple planning modules. The end-effector planner and controller utilize OMPL [19] to generate feasible trajectories for the robotic arms. Global path planning for the chassis is handled by the A* algorithm, ensuring an optimal route through the environment. For the differential-drive base, local planning uses the TEB local planner, which exhibits excellent navigation performance [20] by dynamically adjusting the path based on real-time obstacles and robot constraints.
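To make the global planner concrete, here is a self-contained toy A* search on a small occupancy grid (0 = free, 1 = occupied); the real planner operates on the SLAM-built costmap rather than this hand-written grid.

# Minimal grid A* sketch illustrating the global planner's role.
import heapq

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                                      # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):    # 4-connected moves
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and g + 1 < g_cost.get(nxt, 1e9)):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None                                              # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # routes around the wall via (1, 2)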

IV Experiments

IV-A Experimental Setup

[Fig. 4: Experimental setup, including the patrolling area and the fall- and walking-sensitive areas.]
[Fig. 5: Floor map generated by SLAM.]

Accessing CSI is not supported by all commercially available IEEE 802.11 chipsets. To overcome this limitation, we utilize the Intel 5300 network interface card (NIC), which is well-regarded for its ability to extract CSI from the physical layer with the assistance of the Linux 802.11n CSI Tool [21]. Our setup includes a TP-Link AX6000 (TL-XDR6020) Wi-Fi router and a Lenovo AX 201 laptop equipped with an Intel 5300 NIC for CSI data retrieval. The laptop can be regarded as a Wi-Fi AP or a Wi-Fi amplifier. Both the router and the laptop are positioned outside a conference room, as illustrated in Fig. 4, which also shows the experimental setup, including designated areas such as the patrolling area, the fall-sensitive area, and the walking-sensitive area.

Additionally, a MacBook with an M1 Pro chip is used as a server to process the Wi-Fi CSI data, while the companion robot patrols and waits for a fall signal. The robot uses a VLP-16 for laser input, an Intel RealSense D435i for RGB images and IMU data, and two Kinova Gen2 robotic arms, as shown in Fig. 2. Furthermore, Fig. 5 presents the floor map generated by SLAM.
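The transport between the server and the robot is not specified above; the sketch below shows one plausible minimal design, a UDP alarm message from the CSI server to the patrolling robot, with hypothetical host, port, and message fields.

# Hypothetical server-to-robot fall-alarm transport (illustrative only).
import json
import socket

ROBOT_ADDR = ("192.168.1.50", 9000)   # placeholder robot IP and port

def send_alarm(label, room="424"):
    """Server side: notify the robot when the classifier outputs 'Fall'."""
    if label == "Fall":
        msg = json.dumps({"event": "fall", "room": room}).encode()
        socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ROBOT_ADDR)

def listen_for_alarm():
    """Robot side: block until an alarm arrives, then hand off to navigation."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", ROBOT_ADDR[1]))
    data, _ = sock.recvfrom(1024)
    return json.loads(data)           # e.g. {"event": "fall", "room": "424"}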

IV-B Experiment and Results

In the base model training, we used the dataset from [5], in which the Wi-Fi transmitter and receiver were located 3 meters apart in a line-of-sight (LOS) condition, covering 7 categories of action (557 sets in total), as shown in Table I. Although this dataset is not perfectly aligned with our environment setup, we used it to train the base model because it helps the model learn the features of each action and eases further training. The base dataset was divided into training and test sets at a ratio of 8:2 and fed into the base model. We saved the model at epoch 34, as it attained the highest test accuracy of 90.1%, and used it as the pretrained model for transfer learning.
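A hedged sketch of this training procedure, reusing the TwoStreamCSI sketch from Section III-A with random stand-in data in place of the real CSI windows:

# 8:2 split and best-epoch checkpointing sketch (simplified, illustrative).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

X = torch.randn(557, 100, 90)                    # stand-in for the 557 CSI windows
y = torch.randint(0, 7, (557,))                  # 7 activity classes
train_set, test_set = random_split(TensorDataset(X, y), [446, 111])  # ~8:2
train_dl = DataLoader(train_set, batch_size=32, shuffle=True)
test_dl = DataLoader(test_set, batch_size=32)

model = TwoStreamCSI(n_classes=7)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

best_acc = 0.0
for epoch in range(50):
    model.train()
    for xb, yb in train_dl:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        correct = sum((model(xb).argmax(1) == yb).sum().item() for xb, yb in test_dl)
    acc = correct / len(test_set)
    if acc > best_acc:                           # keep the best-accuracy epoch,
        best_acc = acc                           # as done for epoch 34 above
        torch.save(model.state_dict(), "pretrained_7class.pt")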

To adapt the model to our specific objectives, and given our limited capacity for data collection, we collected a total of 135 sets of experimental data across 3 categories, as shown in Table II, with the walking and fall areas detailed in Fig. 4. In the transfer learning phase, the original seven-class dense layer of the pretrained model was replaced with a new linear layer with a softmax output, turning the model into the three-class classifier required for our objective. We divided the collected dataset at a ratio of 8:2 for training and testing, and used the model from epoch 50, which achieved the highest test accuracy of 96.3%, for our real demo.

TABLE I: Base dataset from [5] (7 classes, 557 sets)
Class      Size
Bed        79
Fall       79
Pick-up    80
Run        80
Sit-down   80
Stand-up   79
Walk       80

TABLE II: Collected dataset for transfer learning (3 classes, 135 sets)
Class      Size
Fall       40
Normal     47
No-person  48

The experimental results are illustrated in Fig. 6, comparing the training accuracy, test accuracy, and average loss of the base model and the transfer learning model.

[Fig. 6: Training accuracy, test accuracy, and average loss of the base model and the transfer learning model.]

TABLE III: Confusion matrix of the base model on the test set (rows: true class, columns: predicted class)
           Bed   Fall  Pick-up  Run   Sit-down  Stand-up  Walk
Bed        0.82  0.06  0.00     0.00  0.06      0.06      0.00
Fall       0.05  0.95  0.00     0.00  0.00      0.00      0.00
Pick-up    0.00  0.00  0.94     0.00  0.06      0.00      0.00
Run        0.00  0.00  0.00     0.83  0.00      0.17      0.00
Sit-down   0.00  0.00  0.00     0.00  1.00      0.00      0.00
Stand-up   0.00  0.00  0.00     0.00  0.05      0.95      0.00
Walk       0.00  0.08  0.00     0.08  0.00      0.08      0.77

TABLE IV: Confusion matrix of the transfer learning model on the test set (rows: true class, columns: predicted class)
                   Fall  Walking  No-person/static
Fall               0.90  0.10     0.00
Walking            0.00  1.00     0.00
No-person/static   0.00  0.00     1.00

The transfer learning model exhibits improvements over the base model in terms of training speed, average loss and test accuracy. Fig. 6 shows that the transfer learning model achieves higher accuracy faster and maintains a lower loss throughout the training process. The confusion matrices over the test sets can be found in Tables III and IV.
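For reference, row-normalized confusion matrices like Tables III and IV can be computed from test-set predictions as in the following sketch; the toy labels below stand in for the real predictions and happen to reproduce Table IV.

# Row-normalized confusion matrix sketch with scikit-learn.
import numpy as np
from sklearn.metrics import confusion_matrix

labels = ["Fall", "Walking", "No-person/static"]
y_true = ["Fall"] * 10 + ["Walking"] * 10 + ["No-person/static"] * 10  # toy stand-in
y_pred = ["Fall"] * 9 + ["Walking"] * 11 + ["No-person/static"] * 10

cm = confusion_matrix(y_true, y_pred, labels=labels)
row_norm = cm / cm.sum(axis=1, keepdims=True)   # each row sums to 1, as in Table IV
print(np.round(row_norm, 2))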

To validate the response effectiveness of the mobile companion robot, we conducted a fall-rescue experiment. In this experiment, a participant performed normal activities and simulated a fall in Room 424. Meanwhile, a robot in Room 422 was carrying out its regular tasks, patrolling between two points. Upon receiving a fall alarm from the server, the robot proceeded to Room 424, cleared the obstacle in its path (unlocking, pushing open, and traversing a closed door), and carried out the rescue operation. The time from fall detection to the robot's arrival at the rescue location was within three minutes. The experiment comprised eight trials, of which seven successfully detected and responded to the fall, while one fall was misidentified as walking, resulting in an overall success rate of 87.5%.

Overall, the integrated system demonstrates the potential of utilizing Wi-Fi and mobile robots for fall detection and response, validating the effectiveness of the proposed approach.

V Conclusion

The integration of Wi-Fi sensing with robotic assistance offers a promising solution for fall detection and response. The proposed system provides non-intrusive monitoring, timely detection, and assistance, reducing the risk of long-term injuries. Future work will focus on expanding the system’s capabilities and improving the accuracy of detection.

Appendix

The test demo video is available at [link]. The collected dataset for transfer learning and the trained model can be accessed at [link].

Acknowledgment

This work is partially supported by Shenzhen Key Laboratory of Robotics Perception and Intelligence (ZDSYS20200810171800001), Shenzhen Science and Technology Program under Grant RCBS 20221008093305007, 20231115141459001, Young Elite Scientists Sponsorship Program by CAST under Grant 2023QNRC001, High level of special funds (G03034K003) from Southern University of Science and Technology, Shenzhen, China.

References

[1] V. Gallistl and R. von Laufenberg, "Caring for data in later life – the datafication of ageing as a matter of care," Information, Communication & Society, vol. 27, no. 4, pp. 774–789, 2024.
[2] D. Chen, A. B. Wong, and K. Wu, "Fall detection based on fusion of passive and active acoustic sensing," IEEE Internet of Things Journal, vol. 11, no. 7, pp. 11566–11578, 2024.
[3] X. Yu, S. Park, D. Kim, E. Kim, J. Kim, W. Kim, Y. An, and S. Xiong, "A practical wearable fall detection system based on tiny convolutional neural networks," Biomedical Signal Processing and Control, vol. 86, p. 105325, 2023.
[4] A. Hussain, S. U. Khan, I. Rida, N. Khan, and S. W. Baik, "Human centric attention with deep multiscale feature fusion framework for activity recognition in internet of medical things," Information Fusion, vol. 106, p. 102211, 2024.
[5] S. Yousefi, H. Narui, S. Dayal, S. Ermon, and S. Valaee, "A survey on behavior recognition using WiFi channel state information," IEEE Communications Magazine, vol. 55, no. 10, pp. 98–104, 2017.
[6] G. Bardaro, A. Antonini, and E. Motta, "Robots for elderly care in the home: A landscape analysis and co-design toolkit," International Journal of Social Robotics, vol. 14, no. 3, pp. 657–681, 2022.
[7] H. El Zein, F. Mourad-Chehade, and H. Amoud, "Intelligent real-time human activity recognition using Wi-Fi signals," in 2023 International Conference on Control, Automation and Diagnosis (ICCAD), 2023, pp. 1–5.
[8] J. Su, Z. Liao, Z. Sheng, A. Liu, D. Singh, and H.-N. Lee, "Human activity recognition using self-powered sensors based on multilayer bidirectional long short-term memory networks," IEEE Sensors Journal, 2022.
[9] S. Mekruksavanich and A. Jitpattanakul, "A lightweight deep residual network for recognizing activities in daily living using channel state information," in 2023 IEEE 14th International Conference on Software Engineering and Service Science (ICSESS), 2023, pp. 171–174.
[10] M. Stuede, K. Nuelle, S. Tappe, and T. Ortmaier, "Door opening and traversal with an industrial cartesian impedance controlled mobile robot," in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 966–972.
[11] K. Jang, S. Kim, and J. Park, "Motion planning of mobile manipulator for navigation including door traversal," IEEE Robotics and Automation Letters, vol. 8, no. 7, pp. 4147–4154, 2023.
[12] M. Arduengo, C. Torras, and L. Sentis, "Robust and adaptive door operation with a mobile robot," Intelligent Service Robotics, vol. 14, no. 3, pp. 409–425, 2021.
[13] Z. Fu, T. Z. Zhao, and C. Finn, "Mobile ALOHA: Learning bimanual mobile manipulation with low-cost whole-body teleoperation," arXiv preprint arXiv:2401.02117, 2024.
[14] J. Gu, D. S. Chaplot, H. Su, and J. Malik, "Multi-skill mobile manipulation for object rearrangement," arXiv preprint arXiv:2209.02778, 2022.
[15] B. Li, W. Cui, W. Wang, L. Zhang, Z. Chen, and M. Wu, "Two-stream convolution augmented transformer for human activity recognition," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 1, pp. 286–293, May 2021.
[16] Y. Meng, Z. Zhao, W. Chen, X. Xiao, and M. Q.-H. Meng, "Workspace analysis of a dual-arm mobile robot system for coordinated operation," in 2021 IEEE International Conference on Real-time Computing and Robotics (RCAR), 2021, pp. 1058–1063.
[17] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34–46, 2007.
[18] T. Qin, P. Li, and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator," IEEE Transactions on Robotics, vol. 34, no. 4, pp. 1004–1020, 2018.
[19] M. Moll, I. A. Şucan, and L. E. Kavraki, "Benchmarking motion planning algorithms: An extensible infrastructure for analysis and visualization," IEEE Robotics & Automation Magazine, vol. 22, no. 3, pp. 96–102, September 2015.
[20] C. Rösmann, F. Hoffmann, and T. Bertram, "Integrated online trajectory planning and optimization in distinctive topologies," Robotics and Autonomous Systems, vol. 88, pp. 142–153, 2017.
[21] D. Halperin, W. Hu, A. Sheth, and D. Wetherall, "Tool release: Gathering 802.11n traces with channel state information," ACM SIGCOMM Computer Communication Review, vol. 41, no. 1, p. 53, Jan. 2011.
