Call for papers

The 2023 RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing (NCSP'23) will be held at the Ala Moana Hotel, Honolulu, Hawaii, from February 28 to March 3, 2023. The workshop is open to researchers from all over the world. In particular, the organizing committee encourages students to present preliminary results that are not necessarily ready for publication in technical journals. Papers describing original work in all aspects of Nonlinear Circuits, Communications and Signal Processing are invited. Topics include, but are not limited to:

■ Nonlinear Circuits and Systems
・Bifurcation and Chaos
・Circuits and Systems
・Complex Networks & Systems
・Control and Fuzzy Systems
・Evolutionary Computation

■ Communications
・Communication & Information System Security
・Communication Networks
・IoT & Sensor Networks
・Optical Communications
・Wireless Communications

■ Signal Processing
・Artificial Intelligence and Machine Learning
・Biomedical Signal Processing
・Image & Video Signal Processing
・Signal Processing for Communications
・Speech & Language Processing

Paper Submission Information

Authors are invited to submit 1-page summaries through the workshop website:
https://www.ncsp.jp/NCSP23/
After acceptance, authors will be required to submit a final paper (four pages required) for the workshop proceedings.

Submission Manual:

   ◆ NCSP'23 Final Paper Submission Guideline

First Submission:

Authors are invited to submit 1-page summaries in PDF format through the paper submission site.

・Summaries must be in English.

・The summary must be within one page.

・There is no style file for 1-page summaries, but each summary should be organized as follows: title, author name(s), affiliation(s), and main body.

・Summaries may also include references.

・Summaries are recommended to be composed of four paragraphs (objective, methods, results, and conclusions); a rough layout sketch follows this list.

・Before submitting a 1-page summary, authors should register their information on the workshop website.

・Only PDF format can be submitted.
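
Since no style file is provided for the 1-page summary, the following is a minimal LaTeX sketch of one possible layout. The overall structure (title, authors, affiliations, a four-paragraph body, and optional references) follows the requirements above; the specific document class, packages, and margins are only illustrative assumptions, not official requirements.

   \documentclass[11pt,a4paper]{article} % class and options are illustrative assumptions
   \usepackage[margin=25mm]{geometry}    % keep everything within one page

   \title{Title of the Summary}
   \author{First Author\\Affiliation A \and Second Author\\Affiliation B}
   \date{}

   \begin{document}
   \maketitle

   \paragraph{Objective} State the problem addressed and the goal of the work.

   \paragraph{Methods} Outline the proposed method or analysis.

   \paragraph{Results} Summarize the main (possibly preliminary) results.

   \paragraph{Conclusions} State the conclusions and planned future work.

   % References may be included if space permits.
   \begin{thebibliography}{9}
   \bibitem{example} A.~Author, ``Example reference,'' Journal, 2022.
   \end{thebibliography}

   \end{document}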

Camera-Ready Submission:

After notification of acceptance, authors must submit a camera-ready manuscript (maximum of four pages) in PDF format by January 20, 2023.

LaTeX Style File and MS Word Template:

   ◆ ncsp23-authorskit.zip

   ◆ LaTeX style file

   ◆ sample LaTeX source file

   ◆ sample and guideline PDF file

   ◆ sample MS Word file (Word 2007 or later recommended)

Submission Page:

Submission: Closed

Special Sessions

Proposals for organized special sessions are welcome and should be sent by email to ss23@ncsp.jp by December 8, 2022 (extended from December 1).

Student Paper Award

NCSP'23 review committee members will evaluate the camera-ready paper and oral presentation of each award-eligible person, and award winners will be determined based on the overall evaluation scores.

~Eligibility Requirements for the NCSP'23 Student Paper Award~

1. You must be a student and the first author of the paper.

2. Your camera-ready paper must be four pages in length.

3. You must submit your camera-ready paper by the deadline.

4. You must check "Entry for Student Paper Award Contest" in the submission system when submitting your paper.

5. You must give an oral presentation (onsite or virtual) at the conference.
(Note that presenting via a pre-recorded video may disqualify you from award eligibility.)

6. You must not be a past NCSP student paper award winner.

7. Whether your presentation is in a "Regular Session" or a "Special Session," you are eligible for the award if you fulfill requirements 1-6 above.

Student Paper Award Winning Papers

1. Behavior Recognition and Monitoring System for Office Workers by Deep Learning from Two Surveillance Cameras,
    Paper ID:1, Atsushi Ogino (Konan University)

2. Combined Use of Federated Learning and Image Encryption for Privacy-Preserving Image Classification with Vision Transformer,
    Paper ID:9, Teru Nagamori (Tokyo Metropolitan University)

3. Design of Load-Independent Class-EF Inverter with Nonsinusoidal Output Current,
    Paper ID:12, Aoi Noda (Chiba Institute of Technology)

4. Color-NeuraCrypt: Privacy-Preserving Color-Image Classification Using Extended Random Neural Networks,
    Paper ID:13, Zheng Qi (Tokyo Metropolitan University)

5. Tackling Over-smoothing on Temporal Convolutional Networks for Operating Work Segmentation,
    Paper ID:16, Keisuke Nakamura (Shizuoka University)

6. Real-time Moving Blind Source Extraction based on Constant Separating Vector and Auxiliary Function Technique,
    Paper ID:29, Sihan Yuan (Waseda University)

7. Image classification with SVM for CMOS sensor generating a vector image,
    Paper ID:38, Haruto Oki (Ritsumeikan University)

8. Prefrontal Power Asymmetry Feature Extraction for Depression Severity in a Clinical Study,
       Paper ID:40, Gengtao Lin (Keio University)

9. Reservoir Computing using Cellular Automata for Predicting Time Series,
    Paper ID:43, Riku Tooyama (Nagaoka University of Technology)

10. Attitude Control of Biped Hopping Robot Using an Inertial Rotor,
       Paper ID:49, Ayumu Kato (Tokushima University)

11. Development of CNN to Estimate Depth Distribution Spectrometry of Soil,
       Paper ID:50, Mohd Azam Bin Mohd Pauzi (Kagawa University)

12. Deep Complex-Valued Neural Network-Based Triple-Path Mask and Steering Vector Estimation for Multi-channel Target Speech Separation,
       Paper ID:64, Mohan Qin (Waseda University)

13. Enhanced xASK-CodeSK for SLIPT,
       Paper ID:70, Yu Ichitsuka (Ibaraki University)

14. Zero-shot evaluation index based on robustness of CNN output,
       Paper ID:77, Chisato Takahashi (Tokyo City University)

15. Prediction Model of Wind Speed and Direction Using CNN and CLSTM with Vector Images Input,
       Paper ID:78, Hiroto Kanagawa (Tokushima University)

16. Multi-layer Cortical Learning Algorithm for Trend Changing Time-series Forecast,
       Paper ID:84, Kazushi Fujino (The University of Electro-Communications)

17. Improving Signal Detection Performance of Successive Interference Cancellation with Nonlinear System by Applying Stochastic Resonance,
       Paper ID:106, Yuta Tomida (Mie University)

18. High Speed Optimization of NOMA System Using Coherent Ising Machine in Dynamic Environment,
       Paper ID:107, Teppei Otsuka (Tokyo University of Science)

19. IRS Reflection Pattern Prediction Considering Receiver's Moving Speed,
       Paper ID:109, Yoshihiko Tsuchiya (Tokyo University of Science)

20. Performance Demonstration of Decentralized TDMA Based on Desynchronization of Nonlinear Oscillators with Different Coupling Schemes,
       Paper ID:118, Takuma Osada (Tokyo University of Science)

21. A Novel Energy Efficient Clustering Using a Sleep Awake Algorithm for Energy Harvesting Wireless Sensor Networks,
       Paper ID:120, Shuntaro Takie (Tokyo University of Science)

22. Routing and Caching Optimization in Autonomous Mobility-Assisted Piggyback Network with mmWave Links,
       Paper ID:122, Daisuke Yamamoto (Tokyo University of Science)

23. Wireless Power Transfer System with Series Resonant Inverse Class-E Inverter,
       Paper ID:128, Yutaro Komiyama (Chiba University)

24. A Proposal of Hidden Screen-Camera Communication Systems Using Adversarial Examples on CNN Depth Estimation Model,
       Paper ID:133, Changseok Lee (Nagoya University)

25. A hardware-efficient FPGA neuron model toward virtual clinical trial of brain prosthesis,
       Paper ID:142, Haruto Suzuki (Hosei University)

26. Nonlinear Sound Processing Functions of An FPGA Integrated Cochlear Model,
       Paper ID:143, Yui Kishimoto (Hosei University)

Presentation Instructions

*** PRESENTATION INSTRUCTIONS & NOTES TO THE AUDIENCE ***

All speakers and audience members must read the following instructions carefully.

(a) For All Speakers and Audience Members

- We will conduct all sessions at NCSP'23 using Zoom. All speakers and audience members should install the Zoom application on their computers or tablets in advance.
- We will have up to three rooms using Zoom breakout rooms. After connecting to Zoom, please enter the breakout room in which you will present or listen.
- All speakers and audience members should display their name, affiliation, and role in the session (speaker, chair, or audience) in Zoom as follows.

### Display example ###
[Chair] Hiroo Sekiya (Chiba Univ.)
[Speaker] Kosuke Sanada (Mie Univ.)

- Please do not record videos or take screenshots of presentations.
- We will inform you of the Zoom ID and passcode for NCSP'23 by email when the workshop date approaches.

(b) For All Speakers
- All speakers have 18 minutes for their presentation
(15 minutes for the talk + 3 minutes for Q&A).
- All speakers must screen share their slides via Zoom on their computers or tablets during the presentation.
- All speakers should connect to Zoom 10 minutes before the session starts.
- To ensure that your presentation goes smoothly, please check your screen share, camera, and microphone under the direction of your session chair in the session room.

(c) For Onsite Speakers
- Onsite speakers do not have to connect their computer or tablet to a projector in the session room because all speakers screen share their slides via Zoom during the presentation.
- We will provide free Wi-Fi access in all session rooms.
- We will connect a Zoom-linked PC to a projector and show speakers' slides onto the screen in the room.
- We recommend that speakers use the pointer option in their presentation application.
- Speakers registered for onsite presentation are not allowed to present virtually from hotels or other locations. Note that some special-session speakers are permitted to present virtually from other countries.
- Because Hawaii's climate is warm, NCSP'23 attendees need not wear formal clothes, such as a suit and tie, at sessions and social events. In keeping with local culture, we recommend that attendees wear an aloha shirt, which is considered formal wear in Hawaii.

(d) For Virtual Speakers
- Virtual speakers should turn on their cameras during their presentations in Zoom. In particular, award-eligible students must turn on their cameras so that we can confirm they are presenting live. If an award-eligible student does not turn on their camera, we may remove them from the award review process.

(e) For All Audience Members
- Virtual audience members can listen to onsite presentations and ask questions remotely. However, because streaming depends on the wireless environment at the venue, onsite Zoom streaming is provided on a best-effort basis. We kindly ask for your understanding.
- To ensure that virtual audience members can hear questions from the floor as clearly as possible, we will provide a microphone for onsite audience members asking questions.

Please do not hesitate to contact us if you have any questions.
Contact address: secretariat23@ncsp.jp