// miniextendr_api/gc_protect.rs

//! GC protection tools built on R's PROTECT stack.
//!
//! This module provides RAII wrappers around R's GC protection primitives.
//!
//! # Submodules
//!
//! | Module | Contents |
//! |--------|----------|
//! | [`tls`] | Thread-local convenience API — `tls::protect(x)` without passing `&ProtectScope` |
//!
//! # Core Types
//!
//! - [`ProtectScope`] — RAII scope that calls `UNPROTECT(n)` on drop
//! - [`OwnedProtect`] — single-value RAII protect/unprotect
//! - [`Root`] — lifetime-tied handle to a protected SEXP
//! - [`ReprotectSlot`] — `PROTECT_WITH_INDEX` + `REPROTECT` for mutable slots
//!
//! # Protection Strategies in miniextendr
//!
//! miniextendr provides three complementary protection mechanisms for different scenarios:
//!
//! | Strategy | Module | Lifetime | Release Order | Use Case |
//! |----------|--------|----------|---------------|----------|
//! | **PROTECT stack** | [`gc_protect`](crate::gc_protect) | Within `.Call` | LIFO (stack) | Temporary allocations |
//! | **Preserve list** | [`preserve`](crate::preserve) | Across `.Call`s | Any order | Long-lived R objects |
//! | **R ownership** | [`ExternalPtr`](struct@crate::ExternalPtr) | Until R GCs | R decides | Rust data owned by R |
//!
//! ## When to Use Each
//!
//! **Use `gc_protect` (this module) when:**
//! - You allocate R objects during a `.Call` and need them protected until return
//! - You want RAII-based automatic balancing of PROTECT/UNPROTECT
//! - Protection is short-lived (within a single function)
//!
//! **Use [`preserve`](crate::preserve) when:**
//! - Objects must survive across multiple `.Call` invocations
//! - You need to release protections in arbitrary order
//! - Example: [`RAllocator`](crate::RAllocator) backing memory
//!
//! **Use [`ExternalPtr`](struct@crate::ExternalPtr) when:**
//! - You want R to own a Rust value
//! - The Rust value should be dropped when R garbage collects the pointer
//! - You're exposing Rust structs to R code
//!
//! ## Visual Overview
//!
//! ```text
//! ┌─────────────────────────────────────────────────────────────────┐
//! │  .Call("my_func", x)                                            │
//! │  ┌──────────────────────────────────────────────────────────┐   │
//! │  │  ProtectScope::new()                                     │   │
//! │  │  ├── protect(Rf_allocVector(...))  // temp allocation    │   │
//! │  │  ├── protect(Rf_allocVector(...))  // another temp       │   │
//! │  │  └── UNPROTECT(n) on scope drop                          │   │
//! │  └──────────────────────────────────────────────────────────┘   │
//! │                          ↓ return SEXP                          │
//! └─────────────────────────────────────────────────────────────────┘
//!
//! ┌─────────────────────────────────────────────────────────────────┐
//! │  preserve (objects surviving across .Calls)                     │
//! │  ├── preserve::insert(sexp)   // add to linked list             │
//! │  ├── ... multiple .Calls ...  // object stays protected         │
//! │  └── preserve::release(cell)  // remove when done               │
//! └─────────────────────────────────────────────────────────────────┘
//!
//! ┌─────────────────────────────────────────────────────────────────┐
//! │  ExternalPtr<MyStruct> (R owns Rust data)                       │
//! │  ├── Construction: temporary Rf_protect                         │
//! │  ├── Return to R → R owns the EXTPTRSXP                         │
//! │  └── R GC → finalizer runs → Rust Drop executes                 │
//! └─────────────────────────────────────────────────────────────────┘
//! ```
//!
//! # Types in This Module
//!
//! This module provides RAII wrappers around R's GC protection primitives:
//!
//! | Type | Purpose |
//! |------|---------|
//! | [`ProtectScope`] | Batch protection with automatic `UNPROTECT(n)` on drop |
//! | [`Root<'scope>`] | Lightweight handle tied to a scope's lifetime |
//! | [`OwnedProtect`] | Single-value RAII guard for simple cases |
//! | [`ReprotectSlot<'scope>`] | Protected slot supporting replace-under-protection |
//!
//! # Design Principles
//!
//! - `ProtectScope` owns the responsibility of calling `UNPROTECT(n)`
//! - `Root<'a>` is a move-friendly, non-dropping handle whose lifetime is tied to the scope
//! - `ReprotectSlot<'a>` supports replace-under-protection via `PROTECT_WITH_INDEX`/`REPROTECT`
//!
//! # Safety Model
//!
//! These tools are `unsafe` to create because they require:
//!
//! 1. **Running on the R main thread** - R's API is not thread-safe
//! 2. **No panics across FFI** - Rust panics must not unwind across the C boundary
//! 3. **Understanding R errors** - If R raises an error (`longjmp`), Rust destructors
//!    will not run, so scope-based unprotection will leak
//!
//! For cleanup that survives R errors, use `R_UnwindProtect` boundaries in your
//! `.Call` trampoline (see [`unwind_protect`](crate::unwind_protect)).
//!
//! # Example
//!
//! ```ignore
//! use miniextendr_api::gc_protect::ProtectScope;
//! use miniextendr_api::ffi::SEXP;
//!
//! unsafe fn process_vectors(x: SEXP, y: SEXP) -> SEXP {
//!     let scope = ProtectScope::new();
//!
//!     // Protect multiple values
//!     let x = scope.protect(x);
//!     let y = scope.protect(y);
//!
//!     // Work with protected values...
//!     let result = scope.protect(some_r_function(x.get(), y.get()));
//!
//!     result.into_raw()
//! } // UNPROTECT(3) called automatically
//! ```
//!
//! # Container Insertion Patterns
//!
//! When building containers (lists, character vectors), children need protection
//! between allocation and insertion:
//!
//! ```ignore
//! // WRONG - child is unprotected between allocation and SET_VECTOR_ELT
//! let child = Rf_allocVector(REALSXP, 10);  // unprotected!
//! list.set_vector_elt(0, child);            // a GC before this line could collect `child`!
//!
//! // CORRECT - use safe insertion methods
//! let list = List::from_raw(scope.alloc_vecsxp(n).into_raw());
//! for i in 0..n {
//!     let child = Rf_allocVector(REALSXP, 10);
//!     list.set_elt(i, child);  // protects child during insertion
//! }
//!
//! // EFFICIENT - use ListBuilder with a scope
//! let builder = ListBuilder::new(&scope, n);
//! for i in 0..n {
//!     let child = scope.alloc_real(10).into_raw();
//!     builder.set(i, child);  // child already protected by the scope
//! }
//! ```
//!
//! See [`List::set_elt`](crate::list::List::set_elt),
//! [`ListBuilder`](crate::list::ListBuilder), and
//! [`StrVec::set_str`](crate::strvec::StrVec::set_str) for safe container APIs.
//!
//! # Reassignment with `ReprotectSlot`
//!
//! Use [`ReprotectSlot`] when you need to reassign a protected value multiple times
//! without growing the protection stack:
//!
//! ```ignore
//! let slot = scope.protect_with_index(initial_value);
//! for item in items {
//!     let new_value = process(slot.get(), item);
//!     slot.set(new_value);  // R_Reprotect, stack count unchanged
//! }
//! ```
//!
//! This avoids the LIFO drop-order pitfall of reassigning `OwnedProtect` guards.

use crate::ffi::{
    R_NewEnv, R_ProtectWithIndex, R_Reprotect, R_xlen_t, RNativeType, Rf_allocList, Rf_allocMatrix,
    Rf_allocVector, Rf_protect, Rf_unprotect, SEXP, SEXPTYPE, SexpExt,
};
use core::cell::Cell;
use core::marker::PhantomData;
use std::rc::Rc;

/// R's PROTECT_INDEX type (just `c_int` under the hood).
pub type ProtectIndex = ::std::os::raw::c_int;

/// Enforces `!Send + !Sync` (R API is not thread-safe).
type NoSendSync = PhantomData<Rc<()>>;

// region: Protector trait

/// A scope-like GC protection backend.
///
/// Functions that allocate multiple intermediate SEXPs can take `&mut impl Protector`
/// to be generic over the protection mechanism. All protected SEXPs stay protected
/// until the protector itself is dropped — there is no individual release via this
/// trait.
///
/// For individual release by key, use [`ProtectPool::insert`](crate::protect_pool::ProtectPool::insert)
/// and [`ProtectPool::release`](crate::protect_pool::ProtectPool::release) directly.
///
/// # Safety
///
/// Implementations must ensure that the returned SEXP remains protected from GC
/// for at least as long as the protector is alive. Callers must not use the
/// returned SEXP after the protector is dropped.
///
/// All methods must be called from the R main thread.
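///
/// # Example
///
/// A sketch of a helper generic over the backend; `combine` is a stand-in for
/// any R-allocating operation, not a real API:
///
/// ```ignore
/// unsafe fn make_pair(p: &mut impl Protector, a: SEXP, b: SEXP) -> SEXP {
///     let a = p.protect(a); // stays protected while `p` lives
///     let b = p.protect(b);
///     combine(a, b) // works with ProtectScope or ProtectPool alike
/// }
/// ```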
pub trait Protector {
    /// Protect a SEXP from garbage collection.
    ///
    /// Returns the same SEXP (for convenience in chaining). The SEXP is now
    /// protected and will remain so until the protector is dropped.
    ///
    /// The key (if any) is managed internally — use the pool's direct API
    /// (`insert`/`release`) if you need individual release.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread. `sexp` must be a valid SEXP.
    unsafe fn protect(&mut self, sexp: SEXP) -> SEXP;
}

impl Protector for ProtectScope {
    #[inline]
    unsafe fn protect(&mut self, sexp: SEXP) -> SEXP {
        unsafe { self.protect_raw(sexp) }
    }
}

impl Protector for crate::protect_pool::ProtectPool {
    #[inline]
    unsafe fn protect(&mut self, sexp: SEXP) -> SEXP {
        // Key is intentionally discarded — Protector is scope-like (all released
        // on drop). For individual release, use pool.insert()/pool.release() directly.
        unsafe { self.insert(sexp) };
        sexp
    }
}

// endregion

// region: ProtectScope

/// A scope that automatically balances `UNPROTECT(n)` on drop.
///
/// This is the primary tool for managing GC protection in batch operations.
/// Each call to [`protect`][Self::protect] or [`protect_with_index`][Self::protect_with_index]
/// increments an internal counter; when the scope is dropped, `UNPROTECT(n)` is called.
///
/// # Example
///
/// ```ignore
/// unsafe fn my_call(x: SEXP, y: SEXP) -> SEXP {
///     let scope = ProtectScope::new();
///     let x = scope.protect(x);
///     let y = scope.protect(y);
///
///     // Both x and y are protected until scope drops
///     let result = scope.protect(some_operation(x.get(), y.get()));
///     result.get()
/// } // UNPROTECT(3)
/// ```
///
/// # Nested Scopes
///
/// Scopes can be nested. Each scope tracks only its own protections:
///
/// ```ignore
/// unsafe fn outer(x: SEXP) -> SEXP {
///     let scope = ProtectScope::new();
///     let x = scope.protect(x);
///
///     let result = helper(&scope, x.get());
///     scope.protect(result).get()
/// } // UNPROTECT(2)
///
/// unsafe fn helper(_parent: &ProtectScope, x: SEXP) -> SEXP {
///     let scope = ProtectScope::new();
///     let temp = scope.protect(allocate_something());
///     combine(x, temp.get())
/// } // UNPROTECT(1) - only this scope's protections
/// ```
pub struct ProtectScope {
    n: Cell<i32>,
    armed: Cell<bool>,
    _nosend: NoSendSync,
}

impl ProtectScope {
    /// Create a new protection scope.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn new() -> Self {
        Self {
            n: Cell::new(0),
            armed: Cell::new(true),
            _nosend: PhantomData,
        }
    }

    /// Protect `x` and return a rooted handle tied to this scope.
    ///
    /// This always calls `Rf_protect`. The protection is released when
    /// the scope is dropped (along with all other protections in this scope).
    ///
    /// # Safety
    ///
    /// - Must be called from the R main thread
    /// - `x` must be a valid SEXP
    #[inline]
    pub unsafe fn protect<'a>(&'a self, x: SEXP) -> Root<'a> {
        let y = unsafe { Rf_protect(x) };
        self.n.set(self.n.get() + 1);
        Root {
            sexp: y,
            _scope: PhantomData,
        }
    }

    /// Protect and return the raw `SEXP` (sometimes more convenient).
    ///
    /// # Safety
    ///
    /// Same as [`protect`][Self::protect].
    #[inline]
    pub unsafe fn protect_raw(&self, x: SEXP) -> SEXP {
        let y = unsafe { Rf_protect(x) };
        self.n.set(self.n.get() + 1);
        y
    }

    /// Protect `x` with an index slot so it can be replaced later via [`R_Reprotect`].
    ///
    /// Use this when you need to update a protected value in-place without
    /// growing the protection stack.
    ///
    /// # Safety
    ///
    /// - Must be called from the R main thread
    /// - `x` must be a valid SEXP
    ///
    /// # Example
    ///
    /// ```ignore
    /// unsafe fn accumulate(values: &[SEXP]) -> SEXP {
    ///     let scope = ProtectScope::new();
    ///     let slot = scope.protect_with_index(values[0]);
    ///
    ///     for &v in &values[1..] {
    ///         let combined = combine(slot.get(), v);
    ///         slot.set(combined);  // Reprotect without growing the stack
    ///     }
    ///
    ///     slot.get()
    /// }
    /// ```
    #[inline]
    pub unsafe fn protect_with_index<'a>(&'a self, x: SEXP) -> ReprotectSlot<'a> {
        let mut idx: ProtectIndex = 0;
        unsafe { R_ProtectWithIndex(x, std::ptr::from_mut(&mut idx)) };
        self.n.set(self.n.get() + 1);
        ReprotectSlot {
            idx,
            cur: Cell::new(x),
            _scope: PhantomData,
            _nosend: PhantomData,
        }
    }

    /// Protect two values at once (convenience method).
    ///
    /// # Safety
    ///
    /// Same as [`protect`][Self::protect].
    #[inline]
    pub unsafe fn protect2<'a>(&'a self, a: SEXP, b: SEXP) -> (Root<'a>, Root<'a>) {
        // SAFETY: caller guarantees R main thread and valid SEXPs
        unsafe { (self.protect(a), self.protect(b)) }
    }

    /// Protect three values at once (convenience method).
    ///
    /// # Safety
    ///
    /// Same as [`protect`][Self::protect].
    #[inline]
    pub unsafe fn protect3<'a>(
        &'a self,
        a: SEXP,
        b: SEXP,
        c: SEXP,
    ) -> (Root<'a>, Root<'a>, Root<'a>) {
        // SAFETY: caller guarantees R main thread and valid SEXPs
        unsafe { (self.protect(a), self.protect(b), self.protect(c)) }
    }

    /// Return the current protection count.
    #[inline]
    pub fn count(&self) -> i32 {
        self.n.get()
    }

    /// Escape hatch: disable `UNPROTECT` on drop.
    ///
    /// After calling this, the scope will **not** unprotect its values when dropped.
    /// You become responsible for ensuring correct unprotection.
    ///
    /// # Safety
    ///
    /// You must ensure the protections performed in this scope are correctly
    /// unprotected elsewhere, or you will leak protect stack entries.
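    ///
    /// # Example
    ///
    /// A sketch of handing protection responsibility to the caller, who is then
    /// on the hook for a matching `Rf_unprotect(1)`:
    ///
    /// ```ignore
    /// let scope = ProtectScope::new();
    /// let v = scope.protect_raw(Rf_allocVector(SEXPTYPE::REALSXP, 3));
    /// scope.disarm(); // scope no longer calls UNPROTECT on drop
    /// // ... `v` stays protected until someone calls Rf_unprotect(1) ...
    /// ```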
    #[inline]
    pub unsafe fn disarm(&self) {
        self.armed.set(false);
    }

    /// Re-arm a previously disarmed scope.
    ///
    /// # Safety
    ///
    /// Only call this if you know the scope was disarmed and you want to restore
    /// automatic unprotection. Be careful not to double-unprotect.
    #[inline]
    pub unsafe fn rearm(&self) {
        self.armed.set(true);
    }

    // region: Allocation + Protection Helpers

    /// Allocate a vector of the given type and length, and immediately protect it.
    ///
    /// This combines allocation and protection in a single step, eliminating the
    /// GC gap that exists when you separately allocate and then protect.
    ///
    /// # Safety
    ///
    /// - Must be called from the R main thread
    /// - Only protects the newly allocated object; does not protect other live
    ///   unprotected objects during allocation
    ///
    /// # Example
    ///
    /// ```ignore
    /// unsafe fn make_ints(n: R_xlen_t) -> SEXP {
    ///     let scope = ProtectScope::new();
    ///     let vec = scope.alloc_vector(SEXPTYPE::INTSXP, n);
    ///     // fill via INTEGER(vec.get()) ...
    ///     vec.get()
    /// }
    /// ```
    #[inline]
    pub unsafe fn alloc_vector<'a>(&'a self, ty: SEXPTYPE, n: R_xlen_t) -> Root<'a> {
        // SAFETY: caller guarantees R main thread
        let sexp = unsafe { Rf_allocVector(ty, n) };
        unsafe { self.protect(sexp) }
    }

    /// Allocate a matrix of the given type and dimensions, and immediately protect it.
    ///
    /// # Safety
    ///
    /// Same as [`alloc_vector`][Self::alloc_vector].
    #[inline]
    pub unsafe fn alloc_matrix<'a>(&'a self, ty: SEXPTYPE, nrow: i32, ncol: i32) -> Root<'a> {
        let sexp = unsafe { Rf_allocMatrix(ty, nrow, ncol) };
        unsafe { self.protect(sexp) }
    }

    /// Allocate a list (VECSXP) of the given length and immediately protect it.
    ///
    /// # Safety
    ///
    /// Same as [`alloc_vector`][Self::alloc_vector].
    #[inline]
    pub unsafe fn alloc_list<'a>(&'a self, n: i32) -> Root<'a> {
        let sexp = unsafe { Rf_allocList(n) };
        unsafe { self.protect(sexp) }
    }

    /// Allocate a STRSXP (character vector) of the given length and immediately protect it.
    ///
    /// # Safety
    ///
    /// Same as [`alloc_vector`][Self::alloc_vector].
    #[inline]
    pub unsafe fn alloc_strsxp<'a>(&'a self, n: usize) -> Root<'a> {
        unsafe { self.alloc_character(n) }
    }

    /// Allocate a VECSXP (generic list) of the given length and immediately protect it.
    ///
    /// # Safety
    ///
    /// Same as [`alloc_vector`][Self::alloc_vector].
    #[inline]
    pub unsafe fn alloc_vecsxp<'a>(&'a self, n: usize) -> Root<'a> {
        let len = R_xlen_t::try_from(n).expect("length exceeds R_xlen_t");
        unsafe { self.alloc_vector(SEXPTYPE::VECSXP, len) }
    }

    // region: Typed vector allocation shortcuts

    /// Allocate an integer vector (INTSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn alloc_integer<'a>(&'a self, n: usize) -> Root<'a> {
        let len = R_xlen_t::try_from(n).expect("length exceeds R_xlen_t");
        unsafe { self.alloc_vector(SEXPTYPE::INTSXP, len) }
    }

    /// Allocate a real vector (REALSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn alloc_real<'a>(&'a self, n: usize) -> Root<'a> {
        let len = R_xlen_t::try_from(n).expect("length exceeds R_xlen_t");
        unsafe { self.alloc_vector(SEXPTYPE::REALSXP, len) }
    }

    /// Allocate a logical vector (LGLSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn alloc_logical<'a>(&'a self, n: usize) -> Root<'a> {
        let len = R_xlen_t::try_from(n).expect("length exceeds R_xlen_t");
        unsafe { self.alloc_vector(SEXPTYPE::LGLSXP, len) }
    }

    /// Allocate a raw vector (RAWSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn alloc_raw<'a>(&'a self, n: usize) -> Root<'a> {
        let len = R_xlen_t::try_from(n).expect("length exceeds R_xlen_t");
        unsafe { self.alloc_vector(SEXPTYPE::RAWSXP, len) }
    }

    /// Allocate a complex vector (CPLXSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn alloc_complex<'a>(&'a self, n: usize) -> Root<'a> {
        let len = R_xlen_t::try_from(n).expect("length exceeds R_xlen_t");
        unsafe { self.alloc_vector(SEXPTYPE::CPLXSXP, len) }
    }

    /// Allocate a character vector (STRSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn alloc_character<'a>(&'a self, n: usize) -> Root<'a> {
        let len = R_xlen_t::try_from(n).expect("length exceeds R_xlen_t");
        unsafe { self.alloc_vector(SEXPTYPE::STRSXP, len) }
    }

    // endregion

    // region: Scalar constructors (allocate + set + protect)

    /// Create a scalar integer (length-1 INTSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn scalar_integer<'a>(&'a self, x: i32) -> Root<'a> {
        unsafe { self.protect(SEXP::scalar_integer(x)) }
    }

    /// Create a scalar real (length-1 REALSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn scalar_real<'a>(&'a self, x: f64) -> Root<'a> {
        unsafe { self.protect(SEXP::scalar_real(x)) }
    }

    /// Create a scalar logical (length-1 LGLSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn scalar_logical<'a>(&'a self, x: bool) -> Root<'a> {
        unsafe { self.protect(SEXP::scalar_logical(x)) }
    }

    /// Create a scalar complex (length-1 CPLXSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn scalar_complex<'a>(&'a self, x: crate::ffi::Rcomplex) -> Root<'a> {
        unsafe { self.protect(SEXP::scalar_complex(x)) }
    }

    /// Create a scalar raw (length-1 RAWSXP), protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn scalar_raw<'a>(&'a self, x: u8) -> Root<'a> {
        unsafe { self.protect(SEXP::scalar_raw(x)) }
    }

    /// Create a scalar string (length-1 STRSXP) from a Rust `&str`, protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn scalar_string<'a>(&'a self, s: &str) -> Root<'a> {
        unsafe { self.protect(SEXP::scalar_string(SEXP::charsxp(s))) }
    }

    // endregion

    // region: CHARSXP, duplication, coercion, environment

    /// Create a CHARSXP from a Rust `&str`, protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn mkchar<'a>(&'a self, s: &str) -> Root<'a> {
        unsafe { self.protect(SEXP::charsxp(s)) }
    }

    /// Deep-duplicate a SEXP, protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread. `x` must be a valid SEXP.
    #[inline]
    pub unsafe fn duplicate<'a>(&'a self, x: SEXP) -> Root<'a> {
        unsafe { self.protect(x.duplicate()) }
    }

    /// Shallow-duplicate a SEXP, protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread. `x` must be a valid SEXP.
    #[inline]
    pub unsafe fn shallow_duplicate<'a>(&'a self, x: SEXP) -> Root<'a> {
        unsafe { self.protect(x.shallow_duplicate()) }
    }

    /// Coerce a SEXP to a different type, protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread. `x` must be a valid SEXP.
    #[inline]
    pub unsafe fn coerce<'a>(&'a self, x: SEXP, target: SEXPTYPE) -> Root<'a> {
        unsafe { self.protect(x.coerce(target)) }
    }

    /// Create a new environment, protected.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
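    ///
    /// # Example
    ///
    /// A sketch, assuming `R_GlobalEnv` is available via `crate::ffi`:
    ///
    /// ```ignore
    /// // Small hashed environment parented to the global environment.
    /// let env = scope.new_env(R_GlobalEnv, true, 29);
    /// ```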
    #[inline]
    pub unsafe fn new_env<'a>(&'a self, parent: SEXP, hash: bool, size: i32) -> Root<'a> {
        unsafe {
            self.protect(R_NewEnv(
                parent,
                if hash {
                    crate::ffi::Rboolean::TRUE
                } else {
                    crate::ffi::Rboolean::FALSE
                },
                size,
            ))
        }
    }

    // endregion

    /// Create a `Root<'a>` for an already-protected SEXP without adding protection.
    ///
    /// This is useful when you have a SEXP that is already protected by some other
    /// mechanism (e.g., a `ReprotectSlot`) and want to return it as a `Root` tied
    /// to this scope's lifetime for API consistency.
    ///
    /// # Safety
    ///
    /// - The caller must ensure `sexp` is already protected and will remain
    ///   protected for at least the lifetime of this scope
    /// - Must be called from the R main thread
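    ///
    /// # Example
    ///
    /// A sketch: wrap a value kept alive by a `ReprotectSlot` without adding a
    /// second PROTECT entry:
    ///
    /// ```ignore
    /// let slot = scope.protect_with_index(x);
    /// let root = scope.rooted(slot.get()); // no extra PROTECT; the slot keeps it alive
    /// ```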
    #[inline]
    pub unsafe fn rooted<'a>(&'a self, sexp: SEXP) -> Root<'a> {
        Root {
            sexp,
            _scope: PhantomData,
        }
    }
    // endregion

    // region: Iterator Collection

    /// Collect an iterator into a typed R vector.
    ///
    /// This allocates once, protects, and fills directly - the most efficient pattern
    /// for typed vectors. The element type `T` determines the R vector type via
    /// the [`RNativeType`] trait.
    ///
    /// # Type Mapping
    ///
    /// | Rust Type | R Vector Type |
    /// |-----------|---------------|
    /// | `i32` | `INTSXP` |
    /// | `f64` | `REALSXP` |
    /// | `u8` | `RAWSXP` |
    /// | [`RLogical`](crate::ffi::RLogical) | `LGLSXP` |
    /// | [`Rcomplex`](crate::ffi::Rcomplex) | `CPLXSXP` |
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    ///
    /// # Example
    ///
    /// ```ignore
    /// unsafe fn squares(n: usize) -> SEXP {
    ///     let scope = ProtectScope::new();
    ///     // Type inferred from the iterator
    ///     scope.collect((0..n).map(|i| (i * i) as i32)).get()
    /// }
    /// ```
    ///
    /// # Unknown Length
    ///
    /// For iterators without an exact size (e.g., after `filter`), collect to a `Vec` first:
    ///
    /// ```ignore
    /// let evens: Vec<i32> = data.iter().filter(|x| *x % 2 == 0).copied().collect();
    /// scope.collect(evens)
    /// ```
    #[inline]
    pub unsafe fn collect<'a, T, I>(&'a self, iter: I) -> Root<'a>
    where
        T: RNativeType,
        I: IntoIterator<Item = T>,
        I::IntoIter: ExactSizeIterator,
    {
        let iter = iter.into_iter();
        let len = R_xlen_t::try_from(iter.len()).expect("length exceeds R_xlen_t");

        let vec = unsafe { self.alloc_vector(T::SEXP_TYPE, len) };
        let ptr = unsafe { T::dataptr_mut(vec.get()) };

        for (i, value) in iter.enumerate() {
            unsafe { ptr.add(i).write(value) };
        }

        vec
    }
}

impl Drop for ProtectScope {
    #[inline]
    fn drop(&mut self) {
        if !self.armed.get() {
            return;
        }
        let n = self.n.replace(0);
        if n > 0 {
            unsafe { Rf_unprotect(n) };
        }
    }
}

impl Default for ProtectScope {
    /// Create a new scope. Equivalent to `unsafe { ProtectScope::new() }`.
    ///
    /// # Safety
    ///
    /// The caller must ensure this is called from the R main thread.
    #[inline]
    fn default() -> Self {
        // SAFETY: This is a foot-gun but matches the pattern of other R interop code.
        // Users should prefer `unsafe { ProtectScope::new() }` for clarity.
        unsafe { Self::new() }
    }
}
// endregion

// region: Root

/// A rooted SEXP tied to the lifetime of a [`ProtectScope`].
///
/// This type has **no `Drop`**. The scope owns unprotection responsibility.
/// This makes `Root` cheap to move and copy (it's just a pointer + lifetime).
///
/// # Lifetime
///
/// The `'a` lifetime ties the root to its creating scope. The compiler ensures
/// you cannot use the root after the scope has been dropped.
#[derive(Clone, Copy)]
pub struct Root<'a> {
    sexp: SEXP,
    _scope: PhantomData<&'a ProtectScope>,
}

impl<'a> Root<'a> {
    /// Get the underlying SEXP.
    #[inline]
    pub fn get(&self) -> SEXP {
        self.sexp
    }

    /// Consume the root and return the underlying SEXP.
    ///
    /// The SEXP remains protected until the scope drops.
    #[inline]
    pub fn into_raw(self) -> SEXP {
        self.sexp
    }
}

impl<'a> std::ops::Deref for Root<'a> {
    type Target = SEXP;

    #[inline]
    fn deref(&self) -> &Self::Target {
        &self.sexp
    }
}
// endregion

// region: OwnedProtect

/// A single-object RAII guard: `PROTECT` on create, `UNPROTECT(1)` on drop.
///
/// Use this for simple cases where you're protecting a single value and
/// don't need the batching benefits of [`ProtectScope`].
///
/// # Example
///
/// ```ignore
/// unsafe fn allocate_and_fill() -> SEXP {
///     let guard = OwnedProtect::new(Rf_allocVector(REALSXP, 10));
///     fill_vector(guard.get());
///     // Return the SEXP - the guard drops and unprotects on this line.
///     // This is safe because no GC can occur between the unprotect and the return.
///     guard.get()
/// }
/// ```
///
/// # Warning: Stack Ordering
///
/// `OwnedProtect` uses `UNPROTECT(1)`, which removes the **top** of the protection
/// stack. If you have nested protections from other sources, the drop order matters!
///
/// For complex scenarios, prefer [`ProtectScope`], which unprotects all its values
/// at once when dropped.
pub struct OwnedProtect {
    sexp: SEXP,
    armed: bool,
    _nosend: NoSendSync,
}

impl OwnedProtect {
    /// Create a new protection guard for `x`.
    ///
    /// Calls `Rf_protect(x)` immediately.
    ///
    /// # Safety
    ///
    /// - Must be called from the R main thread
    /// - `x` must be a valid SEXP
    #[inline]
    pub unsafe fn new(x: SEXP) -> Self {
        let y = unsafe { Rf_protect(x) };
        Self {
            sexp: y,
            armed: true,
            _nosend: PhantomData,
        }
    }

    /// Get the protected SEXP.
    #[inline]
    pub fn get(&self) -> SEXP {
        self.sexp
    }

    /// Escape hatch: do not `UNPROTECT(1)` on drop.
    ///
    /// # Safety
    ///
    /// Leaks one protection entry unless unprotected elsewhere.
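    ///
    /// # Example
    ///
    /// A sketch: keep the value protected beyond this guard's lifetime:
    ///
    /// ```ignore
    /// let guard = OwnedProtect::new(x);
    /// guard.forget(); // `x` stays protected; no UNPROTECT on drop
    /// // ... whoever takes over must eventually call Rf_unprotect(1) ...
    /// ```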
908    #[inline]
909    pub unsafe fn forget(mut self) {
910        self.armed = false;
911        core::mem::forget(self);
912    }
913}
914
915impl Drop for OwnedProtect {
916    #[inline]
917    fn drop(&mut self) {
918        if self.armed {
919            unsafe { Rf_unprotect(1) };
920        }
921    }
922}
923
924impl std::ops::Deref for OwnedProtect {
925    type Target = SEXP;
926
927    #[inline]
928    fn deref(&self) -> &Self::Target {
929        &self.sexp
930    }
931}
932// endregion

// region: ReprotectSlot

/// A protected slot created with `R_ProtectWithIndex` and updated with `R_Reprotect`.
///
/// This allows updating a protected value in-place without growing the protection
/// stack. Useful for loops that repeatedly allocate and update a value.
///
/// The slot is valid only while the creating [`ProtectScope`] is alive.
///
/// # When to Use `ReprotectSlot`
///
/// Use `ReprotectSlot` when you need to **reassign a protected value** multiple times:
///
/// | Pattern | Use | Why |
/// |---------|-----|-----|
/// | Accumulator loop | `ReprotectSlot` | Repeatedly replace result without stack growth |
/// | Single allocation | `ProtectScope::protect` | Simpler, no reassignment needed |
/// | Child insertion | `List::set_elt` | Container handles child protection |
///
/// # Warning: RAII Assignment Pitfall
///
/// R's PROTECT stack is LIFO. Rust's RAII drop order can cause problems:
///
/// ```ignore
/// // WRONG - the old guard drops AFTER the new one is created, so its
/// // UNPROTECT(1) pops the NEW value's protection entry, not the old one's!
/// let mut guard = OwnedProtect::new(old_value);
/// guard = OwnedProtect::new(new_value);
/// ```
///
/// `ReprotectSlot` avoids this by using `R_Reprotect`, which replaces in-place:
///
/// ```ignore
/// // CORRECT - always keeps exactly one slot protected
/// let slot = scope.protect_with_index(old_value);
/// slot.set(new_value);  // R_Reprotect, no stack change
/// ```
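///
/// The pitfall above is pure Rust drop-order semantics and can be observed
/// without R. The following illustrative doctest mocks the PROTECT stack with
/// a thread-local `Vec` (`MockProtect` and `STACK` are hypothetical stand-ins,
/// not part of this crate):
///
/// ```
/// use std::cell::RefCell;
///
/// thread_local! {
///     static STACK: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
/// }
///
/// struct MockProtect(&'static str);
///
/// impl MockProtect {
///     // Mirrors PROTECT: push an entry onto the stack.
///     fn new(name: &'static str) -> Self {
///         STACK.with(|s| s.borrow_mut().push(name));
///         MockProtect(name)
///     }
/// }
///
/// impl Drop for MockProtect {
///     // Mirrors UNPROTECT(1): pops the TOP entry, whatever it is.
///     fn drop(&mut self) {
///         STACK.with(|s| { s.borrow_mut().pop(); });
///     }
/// }
///
/// let mut guard = MockProtect::new("old");
/// // RHS runs first (pushes "new"), THEN the old guard drops and pops "new".
/// guard = MockProtect::new("new");
/// let top = STACK.with(|s| s.borrow().last().copied());
/// // "old" is what remains on the stack; the live guard's entry is gone.
/// assert_eq!(top, Some("old"));
/// drop(guard);
/// ```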
///
/// # Examples
///
/// ## Accumulator Pattern
///
/// ```ignore
/// unsafe fn sum_allocated_vectors(n: i32) -> SEXP {
///     let scope = ProtectScope::new();
///
///     // Initial allocation
///     let slot = scope.protect_with_index(Rf_allocVector(REALSXP, 10));
///
///     for i in 0..n {
///         // Each iteration allocates a new vector
///         let new_vec = compute_step(slot.get(), i);
///         slot.set(new_vec);  // Replace without growing the protect stack
///     }
///
///     slot.get()
/// }
/// ```
///
/// ## Starting with an Empty Slot
///
/// ```ignore
/// unsafe fn build_result(items: &[Input]) -> SEXP {
///     let scope = ProtectScope::new();
///
///     // Start with R_NilValue, replace with the first real result
///     let slot = scope.protect_with_index(R_NilValue);
///
///     for item in items {
///         let result = process_item(item, slot.get());
///         slot.set(result);
///     }
///
///     slot.get()
/// }
/// ```
///
/// ## Multiple Slots
///
/// ```ignore
/// unsafe fn merge_sorted(a: SEXP, b: SEXP) -> SEXP {
///     let scope = ProtectScope::new();
///
///     let slot_a = scope.protect_with_index(a);
///     let slot_b = scope.protect_with_index(b);
///     let result = scope.protect_with_index(R_NilValue);
///
///     // Process both inputs, updating result
///     while !is_empty(slot_a.get()) && !is_empty(slot_b.get()) {
///         let merged = merge_next(slot_a.get(), slot_b.get());
///         result.set(merged);
///         // ... update slot_a and slot_b as needed
///     }
///
///     result.get()
/// }
/// ```
pub struct ReprotectSlot<'a> {
    idx: ProtectIndex,
    cur: Cell<SEXP>,
    _scope: PhantomData<&'a ProtectScope>,
    _nosend: NoSendSync,
}

impl<'a> ReprotectSlot<'a> {
    /// Get the currently protected SEXP.
    #[inline]
    pub fn get(&self) -> SEXP {
        self.cur.get()
    }

    /// Replace the protected value in-place using `R_Reprotect`.
    ///
    /// The new value `x` becomes protected in this slot, and the old value
    /// is no longer protected (but may still be rooted elsewhere).
    ///
    /// Returns the raw SEXP for convenience. Note that this SEXP is only
    /// protected until the next call to `set()` on this slot - if you need
    /// to hold multiple protected values simultaneously, use separate
    /// protection slots or `OwnedProtect`.
    ///
    /// # Safety
    ///
    /// - Must be called from the R main thread
    /// - `x` must be a valid SEXP
    #[inline]
    pub unsafe fn set(&self, x: SEXP) -> SEXP {
        unsafe { R_Reprotect(x, self.idx) };
        self.cur.set(x);
        x
    }

    /// Allocate a new value via the closure and replace this slot's value safely.
    ///
    /// This method encodes the safe pattern for replacing a protected slot with
    /// a newly allocated value. It:
    ///
    /// 1. Calls the closure `f()` to allocate a new SEXP
    /// 2. Temporarily protects the new value (to close the GC gap)
    /// 3. Calls `R_Reprotect` to replace this slot's value
    /// 4. Unprotects the temporary protection
    ///
    /// This prevents the GC gap that would exist if you called `f()` and then
    /// `set()` separately - during that window, the newly allocated value would
    /// be unprotected.
    ///
    /// # Safety
    ///
    /// - Must be called from the R main thread
    /// - The closure must return a valid SEXP
    ///
    /// # Example
    ///
    /// ```ignore
    /// unsafe fn grow_list(scope: &ProtectScope, old_list: SEXP) -> SEXP {
    ///     let slot = scope.protect_with_index(old_list);
    ///
    ///     // Safely grow the list without a GC gap
    ///     slot.set_with(|| {
    ///         let new_list = Rf_allocVector(VECSXP, new_size);
    ///         // copy elements from old_list to new_list...
    ///         new_list
    ///     });
    ///
    ///     slot.get()
    /// }
    /// ```
    #[inline]
    pub unsafe fn set_with<F>(&self, f: F) -> SEXP
    where
        F: FnOnce() -> SEXP,
    {
        // Allocate the new value
        let new_value = f();

        // Temporarily protect the new value to close the GC gap
        let temp = unsafe { Rf_protect(new_value) };

        // Replace this slot's value with the new value
        unsafe { R_Reprotect(temp, self.idx) };
        self.cur.set(temp);

        // Remove the temporary protection (the slot now owns the protection)
        unsafe { Rf_unprotect(1) };

        temp
    }

    /// Take the current value and clear the slot to `R_NilValue`.
    ///
    /// This provides `Option::take`-like semantics. The slot remains allocated
    /// (protect stack depth unchanged), but now holds `R_NilValue` (immortal).
    ///
    /// # Safety
    ///
    /// - Must be called from the R main thread
    /// - The returned SEXP is **unprotected**. If it needs to survive further
    ///   allocations, you must protect it explicitly.
    ///
    /// # Example
    ///
    /// ```ignore
    /// let slot = scope.protect_with_index(some_value);
    /// // ... work with slot.get() ...
    /// let old = slot.take();  // slot now holds R_NilValue
    /// // old is unprotected - protect it if needed
    /// let guard = OwnedProtect::new(old);
    /// ```
    #[inline]
    pub unsafe fn take(&self) -> SEXP {
        let old = self.cur.get();
        let nil = SEXP::nil();
        unsafe { R_Reprotect(nil, self.idx) };
        self.cur.set(nil);
        old
    }

    /// Replace the slot's value with `x` and return the old value.
    ///
    /// This provides `Option::replace`-like semantics. The slot now protects
    /// `x`, and the old value is returned **unprotected**.
    ///
    /// # Safety
    ///
    /// - Must be called from the R main thread
    /// - `x` must be a valid SEXP
    /// - The returned SEXP is **unprotected**. If it needs to survive further
    ///   allocations, you must protect it explicitly.
    ///
    /// # Example
    ///
    /// ```ignore
    /// let slot = scope.protect_with_index(initial);
    /// let old = slot.replace(new_value);
    /// // old is unprotected, slot now protects new_value
    /// ```
    #[inline]
    pub unsafe fn replace(&self, x: SEXP) -> SEXP {
        let old = self.cur.get();
        unsafe { R_Reprotect(x, self.idx) };
        self.cur.set(x);
        old
    }

    /// Clear the slot by setting it to `R_NilValue`.
    ///
    /// The slot remains allocated (protect stack depth unchanged), but releases
    /// its reference to the previous value. The previous value may still be
    /// rooted elsewhere.
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread.
    #[inline]
    pub unsafe fn clear(&self) {
        let nil = SEXP::nil();
        unsafe { R_Reprotect(nil, self.idx) };
        self.cur.set(nil);
    }

    /// Check whether the slot is currently cleared (holds `R_NilValue`).
    ///
    /// # Safety
    ///
    /// Must be called from the R main thread (accesses R's `R_NilValue`).
    #[inline]
    pub unsafe fn is_nil(&self) -> bool {
        self.cur.get() == SEXP::nil()
    }
}

// NOTE: Deref was intentionally removed to avoid UB.
// The previous impl fabricated `&SEXP` from `Cell<SEXP>` via pointer cast,
// which violates Cell's aliasing rules if `set()` is called while a
// reference is live. Use `get()` instead, which returns SEXP by value.
// endregion

pub mod tls;

// region: WorkerUnprotectGuard — Send-safe unprotect for worker threads

/// A `Send`-safe guard that calls `Rf_unprotect(n)` on drop via `with_r_thread`.
///
/// Use this when you `Rf_protect` on the R main thread, then need the unprotect
/// to happen when a guard drops on a **worker thread** (e.g., rayon parallel code).
///
/// [`OwnedProtect`] and [`ProtectScope`] are `!Send` — they can only be used on
/// the R main thread. `WorkerUnprotectGuard` fills the gap for cross-thread patterns
/// where allocation + protect happen on the R thread but the guard lives on a worker.
///
/// # Example
///
/// ```ignore
/// use miniextendr_api::gc_protect::WorkerUnprotectGuard;
///
/// let sexp = with_r_thread(|| unsafe {
///     let sexp = Rf_allocVector(REALSXP, n);
///     Rf_protect(sexp);
///     sexp
/// });
/// let _guard = WorkerUnprotectGuard::new(1);
///
/// // ... parallel work on sexp's data ...
/// // _guard drops here, dispatching Rf_unprotect(1) back to the R thread
/// ```
pub struct WorkerUnprotectGuard(i32);

impl WorkerUnprotectGuard {
    /// Create a guard that will unprotect `n` entries on drop.
    #[inline]
    pub fn new(n: i32) -> Self {
        Self(n)
    }
}

impl Drop for WorkerUnprotectGuard {
    fn drop(&mut self) {
        let n = self.0;
        crate::worker::with_r_thread(move || unsafe {
            crate::ffi::Rf_unprotect_unchecked(n);
        });
    }
}

// Safety: no SEXP field, just an integer count. The actual Rf_unprotect call
// is dispatched to the R main thread via with_r_thread.
unsafe impl Send for WorkerUnprotectGuard {}
// endregion

// region: Typed Vector Collection

// NOTE: Typed vectors (INTSXP, REALSXP, RAWSXP, LGLSXP, CPLXSXP) do NOT need
// complex protection patterns during construction. You allocate once, protect
// once, then fill by writing directly to the data pointer. No GC can occur
// during the fill because you're just doing pointer writes - no R allocations.
//
// Only STRSXP (character vectors) and VECSXP (lists) need the ReprotectSlot
// pattern because each element insertion might allocate (mkChar, etc.).
//
// For typed vectors with unknown length, just collect to Vec<T> first, then
// allocate the exact size. The brief doubling of memory is fine.
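//
// Illustrative sketch of the collect-then-allocate pattern (`doubles_from_iter`
// is a hypothetical helper; assumes the usual R FFI bindings Rf_allocVector,
// REAL, and R_xlen_t, and that the caller re-protects the result if it must
// survive further allocations):
//
//     unsafe fn doubles_from_iter(iter: impl Iterator<Item = f64>) -> SEXP {
//         let buf: Vec<f64> = iter.collect();  // unknown length: buffer first
//         let out = Rf_protect(Rf_allocVector(REALSXP, buf.len() as R_xlen_t));
//         // Plain pointer writes - no R allocation, so no GC during the fill.
//         std::ptr::copy_nonoverlapping(buf.as_ptr(), REAL(out), buf.len());
//         Rf_unprotect(1);  // safe: no GC can run between here and the return
//         out
//     }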
// endregion

// region: Tests

#[cfg(test)]
mod tests {
    use super::*;

    // Note: These tests primarily verify compilation and basic invariants.
    // Full integration testing requires R to be initialized.

    // region: Basic invariants

    #[test]
    fn protect_scope_has_nosend_marker() {
        // Verify the NoSendSync marker type is present
        // (ProtectScope contains PhantomData<Rc<()>> which makes it !Send + !Sync)
        let _: NoSendSync = PhantomData;
    }

    #[test]
    fn protect_scope_default_count_is_zero() {
        let scope = ProtectScope::default();
        assert_eq!(scope.count(), 0);
    }

    #[test]
    fn root_is_copy() {
        fn assert_copy<T: Copy>() {}
        assert_copy::<Root<'static>>();
    }

    #[test]
    fn tls_root_is_copy() {
        fn assert_copy<T: Copy>() {}
        assert_copy::<tls::TlsRoot>();
    }
    // endregion

    // region: Threading: compile-time !Send + !Sync checks

    #[test]
    fn protect_scope_is_not_send() {
        fn assert_not_send<T>()
        where
            T: ?Sized,
        {
            // This test passes if ProtectScope is !Send.
            // We can't directly assert !Send, but the Rc<()> marker ensures it.
        }
        assert_not_send::<ProtectScope>();
    }

    #[test]
    fn protect_scope_is_not_sync() {
        fn assert_not_sync<T>()
        where
            T: ?Sized,
        {
            // This test passes if ProtectScope is !Sync
        }
        assert_not_sync::<ProtectScope>();
    }

    #[test]
    fn owned_protect_is_not_send() {
        fn assert_not_send<T>()
        where
            T: ?Sized,
        {
        }
        assert_not_send::<OwnedProtect>();
    }

    // Note: We can't easily assert !Send/!Sync at compile time without
    // negative trait bounds. The PhantomData<Rc<()>> marker ensures these types
    // are !Send and !Sync. If you need compile-time verification, use the
    // static_assertions crate with `assert_not_impl_any!`.
    // endregion

    // region: TLS scope tests

    #[test]
    fn tls_no_active_scope_by_default() {
        assert!(!tls::has_active_scope());
        assert_eq!(tls::current_count(), None);
        assert_eq!(tls::scope_depth(), 0);
    }

    #[test]
    fn tls_scope_depth_tracking() {
        // Without R, we can only test the TLS tracking logic;
        // the actual protect/unprotect requires the R runtime.
        assert_eq!(tls::scope_depth(), 0);

        // We can't fully test with_protect_scope without R initialized,
        // but we can verify the API compiles and the TLS logic works.
    }

    #[test]
    #[should_panic(expected = "tls::protect called outside of with_protect_scope")]
    fn tls_protect_panics_outside_scope() {
        // No scope is active, so tls::protect panics before it touches any
        // R API - the test never needs an initialized R runtime.
        unsafe {
            let _ = tls::protect(crate::ffi::SEXP(std::ptr::null_mut()));
        }
    }
    // endregion

    // region: Escape hatch tests

    #[test]
    fn disarm_prevents_unprotect() {
        let scope = ProtectScope::default();
        assert!(scope.armed.get());

        unsafe { scope.disarm() };
        assert!(!scope.armed.get());

        // Scope will drop without calling Rf_unprotect (can't test the actual R call)
    }

    #[test]
    fn rearm_restores_unprotect() {
        let scope = ProtectScope::default();

        unsafe {
            scope.disarm();
            assert!(!scope.armed.get());

            scope.rearm();
            assert!(scope.armed.get());
        }
    }
    // endregion

    // region: Counter tracking tests

    #[test]
    fn scope_counter_starts_at_zero() {
        let scope = ProtectScope::default();
        assert_eq!(scope.count(), 0);
    }

    // Note: The following tests require R to be initialized and would be
    // integration tests rather than unit tests:
    //
    // - Balance test: protect N, verify unprotect(N) on drop (gctorture)
    // - Nested scopes: verify drop order yields correct net unprotect
    // - Reprotect slot: verify set() many times keeps count at +1
    //
    // These should be tested in miniextendr-api/tests/gc_protect.rs with
    // embedded R.
    // endregion
}
// endregion