Yazar "Bayraktar, Ertuğrul" seçeneğine göre listele
Listeleniyor 1 - 2 / 2
Item
Low-cost variable stiffness joint design using translational variable radius pulleys
(Pergamon-Elsevier Science Ltd, 2018) Yiğit, Cihat Bora; Bayraktar, Ertuğrul; Boyraz, Pınar
Robot joints are expected to be safe, compliant, compact, simple, and low-cost. Gravity compensation, zero backlash, energy efficiency, and stiffness adjustability are desirable features in robotic joints. Variable radius pulleys (VRPs) provide a simple, compact, and low-cost solution to the stiffness-adjustment problem. VRP mechanisms maintain a preconfigured nonlinear force-elongation curve using an off-the-shelf torsional spring and a shaped pulley profile. In this paper, three synthesis algorithms are presented for VRP mechanisms to obtain a desired force-elongation curve. In addition, a feasibility condition is proposed for determining the torsional spring coefficient. Using the synthesis methods and the feasibility condition, a variable stiffness mechanism that uses two VRPs in an antagonistic cable-driven structure is designed and manufactured. The outputs of the three synthesis methods are then compared against the force-elongation characteristics measured in a tensile-testing experiment, on a custom testbed that measures pulley rotation, cable elongation, and tensile force simultaneously. With the experiment as the baseline, the best algorithm reproduced the desired curve with a root-mean-square (RMS) error of 13.3%. Furthermore, the VRP-based variable stiffness joint (VRP-VSJ) is implemented with a linear controller to evaluate the mechanism's position accuracy and stiffness adjustability.

Item
Object manipulation with a variable-stiffness robotic mechanism using deep neural networks for visual semantics and load estimation
(Springer London, 2019) Bayraktar, Ertuğrul; Yiğit, Cihat Bora; Boyraz, Pınar
In recent years, computer vision applications in robotics have improved to approach human-like visual perception and scene/context understanding. Following this aspiration, this study explores whether object manipulation performance can be improved by connecting the visual recognition of objects to their physical attributes, such as weight and center of gravity (CoG). To develop and test this idea, an object manipulation platform was built comprising a robotic arm, a depth camera fixed at the top center of the workspace, encoders embedded in the robotic arm mechanism, and microcontrollers for position and force control. Since both the visual recognition and force estimation algorithms use deep learning principles, the test setup was named Deep-Table. The objects used in the manipulation tests are everyday items commonly found on modern office desks. Visual object localization and recognition are performed in two distinct branches by deep convolutional neural network architectures. Five cases are presented, each with a different level of available information about object weight and CoG. The results confirm that, using the proposed algorithm, the robotic arm can successfully move objects ranging from a few grams (an empty bottle) to around 250 g (a ceramic cup) without failure or tipping. The proposed method also shows that connecting object recognition with load and contact-point estimation further improves performance, yielding smoother motion.
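The first item validates its synthesis algorithms via an RMS error between the desired and experimentally measured force-elongation curves. Below is a minimal Python sketch of how such a percentage RMS comparison might be computed; the abstract does not state the normalization used in the paper, so normalizing by the desired curve's force range is an assumption, and all numerical values are purely illustrative.

```python
import numpy as np

def rms_percent_error(desired, measured):
    """RMS deviation between two force-elongation curves, expressed
    as a percentage of the desired curve's force range.

    Note: the paper's exact normalization is not given in the
    abstract; range normalization here is an assumption.
    """
    desired = np.asarray(desired, dtype=float)
    measured = np.asarray(measured, dtype=float)
    rms = np.sqrt(np.mean((measured - desired) ** 2))
    return 100.0 * rms / (desired.max() - desired.min())

# Illustrative example: compare a synthesized curve against
# tensile-test data sampled at the same elongation points.
elong = np.linspace(0.0, 0.05, 50)                    # cable elongation [m]
desired = 200.0 * elong + 4000.0 * elong**2           # target force [N]
measured = desired * (1 + 0.05 * np.sin(40 * elong))  # noisy test data
print(f"RMS error: {rms_percent_error(desired, measured):.1f}%")
```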
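The second item couples visual recognition with a deep-learning load estimator. The sketch below shows one plausible shape for such a load-estimation head in PyTorch, mapping a short window of joint-sensor readings to an estimated object weight and planar CoG offset. The input features, layer sizes, and output layout are all assumptions; the abstract states only that the force-estimation algorithm uses deep learning principles.

```python
import torch
import torch.nn as nn

class LoadEstimator(nn.Module):
    """Hypothetical load-estimation head: maps a window of joint-encoder
    and motor-current readings to [weight_g, cog_x, cog_y]. Architecture
    details are assumptions, not the paper's published network."""

    def __init__(self, n_features: int = 12, window: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),  # (B, window, n_features) -> (B, window * n_features)
            nn.Linear(window * n_features, 128),
            nn.ReLU(),
            nn.Linear(128, 3),  # estimated weight and planar CoG offset
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Illustrative forward pass on a dummy batch of sensor windows.
model = LoadEstimator()
readings = torch.randn(4, 32, 12)      # 4 windows of 32 samples x 12 features
weight_and_cog = model(readings)
print(weight_and_cog.shape)            # torch.Size([4, 3])
```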